When writing code, it is quite important to keep in mind the entire life cycle of your project. Typically, scientific software starts out small: it's just a quick hack to solve a single problem, so good quality is not that important. But of course, over time your code base will grow, and it will get more complex. More importantly, others will start using your code, either just to verify your results or even as building blocks for their own research.
No doubt you've had the following experience: you open a source file, you stare at it for minutes, and you think, "What the heck?" Clearly, the author of that piece of code failed to convey his intentions to you. Perhaps that can serve as motivation to try to create clean code that is easy to read.
Perhaps you can think of coding as storytelling. It's not just about telling a piece of hardware what to do; it's actually more like writing a novel, so it should be pleasant to read. We'll learn best practices to write clean code that's easy to understand. When you're using software, documentation is also quite important. Good-quality documentation makes using software a lot less problematic, so having documentation as part of your release is quite important. Documentation can be written at various levels, for instance at the level of your functions, classes and methods. For that API documentation you will learn about Doxygen, a tool that generates beautiful-looking documentation.
At the level of the application as a whole you will also require documentation, a manual if you will. You'll learn about MkDocs, which will generate that for you. Of course, you'll also learn about best practices and dos and don'ts in writing documentation.
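As a taste of what Doxygen-style API documentation looks like in practice, here is a minimal, hypothetical sketch. The `stats.py` module and `mean` function are invented for illustration and are not from the course; Doxygen recognizes `##` comment blocks in Python sources, with special commands such as `@brief`, `@param` and `@return`:

```python
## @file stats.py
#  @brief A tiny, made-up module used only to illustrate Doxygen comment syntax.

## @brief Compute the arithmetic mean of a sequence of numbers.
#
#  @param values  A non-empty sequence of numbers.
#  @return The mean of @p values as a float.
#  @exception ValueError Raised if @p values is empty.
def mean(values):
    if not values:
        raise ValueError("mean() requires at least one value")
    return sum(values) / len(values)
```

Running `doxygen` over a source tree annotated this way produces cross-referenced HTML (and optionally LaTeX) API documentation.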
Introduction to Week 1
Week 1 learning goals
During this week, you’ll learn about:
- Code style,
- Best coding practices,
- Error handling,
- Writing API documentation,
- Writing application documentation.
The White House has wholeheartedly embraced yoga as a worthy physical activity and a possible solution to the childhood obesity epidemic in America.
Both First Lady Michelle Obama and Lady Gaga are avid yoga enthusiasts, and here are the 3 things that they want your child to know about yoga.
- You can get a medal from the President for doing yoga
- Yoga can help you get better grades in school
- Yoga improves your health and overall well-being.
The Presidential Active Lifestyle Award (PALA) challenges children to commit to a daily physical activity, including yoga, for 6 out of 8 weeks, both to get them off the couch and to earn a medal from the President.
The rates of childhood obesity and inactivity in the United States are dramatic. According to the Centers for Disease Control and Prevention, childhood obesity has more than doubled over the past thirty years.
This increase correlates with the boom of computers and video games since the 1980s. Before that, kids were outside playing sports and burning calories instead of sitting inside playing sports video games on the computer.
We have a generation of youth that has to make an effort to be physically active. Children of all ages and sizes and varying degrees of athletic ability can practice yoga, a non-competitive form of exercise endorsed by the White House.
Secondly, yoga increases the ability to concentrate and focus, helps kids feel empowered, and helps them stay calm (even kids with ADD or ADHD). All told, this translates into better performance in school.
Achieving that mind-body connection allows children to maintain their cool while coping with anything from tests to peer pressure.
Lastly, yoga improves the health and overall well-being of children.
Our nation, including our youth, is under siege by obesity and diseases like diabetes that stem from a poor diet and lack of exercise. Yoga for kids will create good habits and a foundation for well-being.
Jane Young, Principal of Preschool – 4th Grade at St. Matthew’s Parish School in Pacific Palisades, California states “I see our young children benefiting from the engaging activity of yoga. Practicing simple yoga poses supports and enhances learning, health, and personal responsibility.”
Both Michelle Obama and Lady Gaga agree that all that stretching, bending, breathing, pushing, pulling, twisting and turning will improve the overall health of children and set them on the right path for life.
The Reichsbank, 1924-1941
August 3, 2014
Here’s something you don’t see everyday: a look at Germany’s Reichsbank during the 1920s and 1930s. The German mark was nominally stable vs. gold throughout the interwar period, but that was achieved with heavy capital controls especially after 1931 I think.
This continues our look at some major central banks during the 1914-1941 period.
July 27, 2014: The Bank of France, 1914-1941
July 20, 2014: The Bank of England, 1914-1941
January 26, 2014: The Federal Reserve in the 1930s #2: Interest Rates
January 19, 2014: The Federal Reserve in the 1930s
July 18, 2014: Foreign Exchange Rates 1913-1941 #8: A Brief Summary
December 23, 2012: The Federal Reserve in the 1920s 4: The Historical Record
December 16, 2012: The Federal Reserve in the 1920s 3: Balance Sheet and Base Money
November 25, 2012: The Federal Reserve in the 1920s 2: Interest Rates
November 18, 2012: The Federal Reserve in the 1920s
The source of our data is the Federal Reserve Banking and Monetary Statistics, 1914-1941.
After the famous hyperinflation in the early 1920s, Germany maintained the mark’s link to gold until WWII, at least on a nominal or official basis. The 1923 gold parity was the same as the pre-WWI parity, although it didn’t have to be. Just tradition.
We see here a big drop in gold bullion holdings after 1931, and of course a giant rise later in government Treasury bills. It looks like there was some printing-press finance for the war beginning around 1938 or so.
Liabilities consisted mostly of banknotes, with a little bit of deposits (bank reserves).
Let’s look at just the period to 1937, skipping for now the big rise toward the end of the 1930s.
Again, we see that big drop in gold holdings. However, the fact that it continues until 1934 indicates that it was still possible to somehow acquire gold; or, perhaps the government itself was using it to obtain foreign imports.
Base money makes a big contraction beginning in 1931. There was a lot going on around that time, including some bankruptcies of big banks and also major sovereign defaults. Germany had a major sovereign default in 1932; I think Austria was in 1931. I would not consider this reduction in base money supply “contractionary” unless the value of the mark rose; but there was hardly any reason for that, and the mark was actually stable in value. So, it was a reduction in supply in reaction to a reduction in demand, and there certainly were a lot of reasons that people would perhaps not really want to hold marks. This reduction in supply, in the context of maintaining a gold parity value, doesn’t really have many broader effects, although short-term interest rates can be affected for a short while.
What if, for example, the Bank of France asked for its mark-denominated assets (if indeed it had any; most foreign holdings were British pound and U.S. dollar assets I think) to be redeemed in gold bullion at the Reichsbank? Actually, a government bond is not redeemable for gold under a gold standard system; only the liabilities of the currency manager (Reichsbank), which are basically banknotes and domestic commercial bank deposits at the Reichsbank, or base money. But, let’s say that the Bank of France sold the German government bonds for cash (actually a bank deposit at a commercial bank, which is a form of debt liability of the commercial bank), and then asked the commercial bank to redeem that for gold. The commercial bank could default, in essence saying that it would not pay back its deposit in the form of base money. However, let’s say that, one way or another, the commercial bank does ask for its base money (bank deposits) at the Reichsbank to be redeemed in gold. The Reichsbank could, conceivably, refuse to do so, arguing (plausibly) that it would be overly disruptive given the very large size of the order, or perhaps demand that it be spread out over some time period. The Bank of France is not without options here; they can just sell their bonds for cash, and then use the cash to buy gold bullion on the open market (from miners for example, or anyone wishing to sell gold bullion), thus accomplishing the same thing as if they had acquired gold bullion directly from the Reichsbank. They could even engage in forward contracts with the miners, in essence buying future production. The main reason they might not do this is if the market price of gold in marks was higher than the parity value; let’s say, 55 marks/ounce instead of parity at 50. 
In that case, the mark is weak compared to its gold parity; marks are oversupplied, and a reduction in base money supply (resulting from gold redemption) is exactly what is needed to return the mark to its gold parity value.
But, let’s say that the Reichsbank does indeed deliver gold to the commercial bank, and the commercial bank does indeed deliver gold to the Bank of France, and that the German base money supply does indeed contract by the amount of the gold redemption, at least in the first instance. Actually, that did NOT happen, as the increase in Treasury bills effectively offset the initial outflow of gold. But, in this scenario, one of two things might happen: the value of the mark might not rise, in which case the gold redemption represented a genuine net reduction in demand, and the reduction in base money supply was exactly correct to accommodate this decrease in demand.

Or, the value of the mark might rise, in which case arbitrageurs would step in. Let’s say the mark/gold parity was 50 marks/ounce, and the market value of the mark was 45 marks/ounce. Arbitrageurs would come in and buy gold with marks at 45 marks/ounce in the open market, and then give the gold to the Reichsbank and receive 50 marks in return. The Reichsbank’s assets would thus expand by one ounce of gold bullion and base money would expand by 50 marks. Arbitrageurs would make 5 marks of risk-free profit. This would continue until there was no more profit to be made; in other words, until the market value of the mark was at the gold parity of 50 marks/ounce. Or, the value of the mark might rise against some other currency (the US dollar) by some little margin, in which case the Reichsbank could step in and buy Treasury bills in the open market to expand the base money supply, and reduce the value of the mark until it returned to its official gold parity (and implied foreign exchange rate). Or, the Reichsbank could buy dollars and sell marks, and the asset side of the balance sheet would ultimately reflect an increase not in German government bills, but U.S. government bills, which would be recorded as an increase in “foreign exchange.”

Another potential process is that, as the commercial bank requires more base money to meet deposit redemptions into gold bullion, it must effectively borrow these bank reserves (note the rather low amount of bank reserves throughout this time period). This could cause an increase in short-term interest rates; and in any case, the bank can then discount bills at the Reichsbank, thus increasing the Reichsbank’s bill holdings. Under the principles of “19th century central banking,” these bank loans could be “short-term, self-cancelling loans,” allowed to roll off as the bills mature. Thus, base money naturally contracts again, but spread out over a longer time period. The ultimate result is that the effects of a sudden gold redemption are spread out. If the effect of the rolloff of bills is that base money is insufficient, then gold inflows increase to offset the bill rolloff. Or, if the contraction of base money is warranted over a longer time period, the bills roll off but the value of the mark does not rise sufficiently to cause bullion inflows. You can see that there are a lot of mechanisms in place that would self-adjust to keep the supply of base money exactly what it should be to maintain the gold parity, no matter what the Bank of France or others might do.
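The arbitrage loop described above can be sketched as a toy simulation. Everything here is illustrative: the parity of 50 marks/ounce and the market price of 45 come from the example in the text, but the assumption that the market price of gold scales linearly with the base money supply is my own simplification, not the author's model:

```python
PARITY = 50.0  # official parity, marks per ounce (from the example above)

def run_arbitrage(base_money, market_price, parity=PARITY):
    """Simulate gold-window arbitrage: while gold is cheaper in the open
    market than at the Reichsbank's window, buy an ounce at the market
    price and redeem it at parity.  Each redemption issues new banknotes,
    which (in this toy model) pushes the market gold price back to parity."""
    k = market_price / base_money  # assumed linear price/base-money link
    profit = 0.0
    while market_price < parity - 1e-9:
        profit += parity - market_price  # risk-free gain on one ounce
        base_money += parity             # 50 new marks issued against gold
        market_price = k * base_money    # marks expand, mark value falls
    return base_money, market_price, profit

# Starting from a 4,500-mark base with gold at 45 marks/ounce, arbitrage
# expands base money by 500 marks and stops once gold reaches parity.
final_base, final_price, total_profit = run_arbitrage(4500.0, 45.0)
```

The point of the sketch is the stopping condition: once the market price equals parity, the risk-free profit is zero and base money stops expanding on its own.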
Actually, the proximate cause of the decline in base money beginning in 1931 was not the decrease in gold bullion, as that was immediately offset by an increase in holdings of Treasury bills. Rather, it was the decline in holdings of Treasury bills thereafter. Why should Treasury bills decline? Since these can only increase or decrease via open-market operations (assuming automatic rollovers of maturing bills), changes are thus due to the decisions of the Reichsbank itself, not the Bank of France or anyone else. Probably, they were reacting to weakness of the mark in the forex market and also compared to the gold parity, shrinking the monetary base in response exactly as is required to maintain the gold parity value for the mark. In other words, the contraction in base money in 1931-1933 most probably represents adjustments necessary to maintain the mark at its parity value in response to declining demand for marks.
If this is confusing, then please see my book Gold: the Monetary Polaris, where these scenarios are described in detail. Indeed, the decline in base money from about 5 billion marks to about 4 billion marks is exactly the kind of “20% reduction in base money supply” that I mention is usually about right to react to a currency crisis. What a coincidence. Not really–it’s just that I’ve seen this before, and use the example of Russia in 2009 in my book. A similar reduction in base money supply (probably less than 20% in those cases) would have allowed Britain to avoid devaluation in 1931, or the U.S. in 1971.
We can see from the value of the mark vs. gold at the top (actually the value of the mark vs. the dollar, translated into gold), that the mark indeed had a sag in 1931 below its parity value, and that this sag was indeed corrected. The “sag” in 1933 might just be an artifact of translating the then-declining value of the dollar vs. gold into a mark/gold value, a complicated way of saying it is perhaps statistical noise that should be ignored.
In any case, unlike the central banks of the U.S., Britain and France, where base money was largely unchanged through the 1929-1935 period, the Reichsbank’s base money supply had some ups and downs.
This was quite an eventful time, and a review of historical works would be necessary to get an idea of all the things that were going on then. Simultaneous bank and sovereign default has a way of making a mess of everything.
Scientists Attempt to Design a Robotic Octopus [Video]
The Scuola Superiore Sant’Anna University in Pisa, Italy, in conjunction with groups from Switzerland, Israel, Greece, and Italy, has embarked on a research project to completely replicate the plasticity, skill, and agility of a live octopus. The video below is pretty eerie in its similarity to octopus arm movement:
Currently, as the YouTube video above shows, they have been able to replicate one arm of an octopus, in both its dexterity and its suckers. They have yet to make the other seven arms work in conjunction with each other; however, they are only two years into their four-year project.
We look forward to seeing how this project progresses!
August 07, 2012 // Category: Fitness Advisor
Many people confuse the terms “impact” and “intensity” when it comes to workouts. “Impact” refers to the force exerted on your body in a particular exercise, while “intensity” refers to the level of difficulty, focus and power involved.
High impact exercises include running, jogging, plyometrics (jumping) and other workouts where the body is making contact with, or pounding, the ground. Low impact exercises typically mean that one foot stays in contact with the ground, such as walking, climbing, riding a bike or pedaling the elliptical.
High impact exercises tend to put more stress on the joints – particularly the ankles, knees, hips and back – but the good news is that low impact does not mean low intensity.
Follow this advice to find a low impact/high intensity workout that works for you:
1911 Encyclopædia Britannica/Lucretia
1911 Encyclopædia Britannica, Volume 17
LUCRETIA, a Roman lady, wife of Lucius Tarquinius Collatinus, distinguished for her beauty and domestic virtues. Having been outraged by Sextus Tarquinius, one of the sons of Tarquinius Superbus, she informed her father and her husband, and, having exacted an oath of vengeance from them, stabbed herself to death. Lucius Junius Brutus, her husband's cousin, put himself at the head of the people, drove out the Tarquins, and established a republic. The accounts of this tradition in later writers present many points of divergence.
Livy i. 57-59; Dion. Halic. iv. 64-67, 70, 82; Ovid, Fasti, ii. 721-852; Dio Cassius, frag. 11 (Bekker); G. Cornewall Lewis, Credibility of Early Roman History, i.
The Frauenthal Center for the Performing Arts, formerly known as the Michigan Theater, was built in 1929 by Muskegon’s own movie mogul, Paul Schlossman. His trademark camel-hair coat, the way his hat tipped over one eye, and his striking demeanor were all clues to Schlossman’s colorful life as a showman.
Between 1915 and 1917, the Schlossman Company contracted architect C. Howard Crane to design three of Muskegon’s great theaters – the Rialto, Majestic and Regent theaters. In 1920 Schlossman’s movie empire expanded when he was appointed secretary-treasurer of the Strand Amusement Company of Muskegon Heights. He ardently built the Strand Theater on Broadway in the Heights.
Along with Crane, Schlossman took a personal interest in the design of the Michigan Theater. Built as a theater for “100% all talking motion pictures,” the cost was a mere $690,000. The theater opened on September 17, 1930 receiving rave reviews from the community, as patrons were awed with its “extraordinary beauty and grace.” An advertisement in the Muskegon Chronicle proudly stated, “With the opening of the new Michigan Theater, Muskegon can boast the best in Michigan, outside of Detroit, and second to none in the United States for a town our size.”
The architectural styling of the theater is Moorish, or Spanish renaissance, and gleamed with extraordinary gold accents, cherubs and griffins. The ceiling and walls were adorned with beautiful ornamental light fixtures, the carpet and opera chairs covered with rich velour, and the stage enclosed with lipstick red draperies. Many ornate carvings, arches and intricate plasterwork abound. The ceiling is surrounded by plaster shells and comes to a large acoustic dome, or oculus, in the center. Suspended from the roof hanging by steel wires, the ceiling is not unlike the suspended acoustical ceilings with which we are all familiar.
Almost 30 years after its grand opening, the theater closed for a brief time to be refurbished. The theater was showing wear from almost constant use, and the managers at the time thought it needed “sprucing up.” In keeping with the tastes of the 50s, they painted over the colorful Spanish Renaissance interior with two muddy shades of beige. Upon reopening, the theater continued to operate as a movie house, even through the unfortunate demise of Schlossman’s company in the late 1960s. Then in the early 70s the Michigan Theater seemed to have an ill-fated future itself as it stood boarded up.
Fortunately a glimmer of light shined through the boards when an ad hoc group of citizens approached both the City and County of Muskegon to save the beloved theater. Finding no help from the municipalities, and with the wrecking ball ready to swing, the citizen’s group approached the Community Foundation for Muskegon County. The Foundation, small in size at that time, had received a substantial gift from a local industrialist – Mr. A. Harold Frauenthal.
Mr. Frauenthal’s wishes stated that the gift be used for the good of the community, and in 1976, the Community Foundation used the funds to purchase the entire block of West Western Avenue between Third and Fourth Streets. The block included the historic Michigan Theater, an abandoned furniture store and storage space. The work of establishing the “Frauenthal Center” had begun.
Throughout the late 70s and 80s, the Community Foundation operated the Center with a variety of entertainment, including the local symphony, theater, travelogues, movies, concerts, and outside promoter events. But, a theater designed in 1930 lacked the dressing rooms, backstage areas and support spaces needed by these presentations. The Foundation began transforming the old furniture space into what is now known as the Hilt Building which houses the 170 seat Beardsley Theater, visual arts gallery, meeting rooms, rehearsal halls, dressing rooms and reception areas.
Sooner rather than later, renovation was due once again in the main theater. To this end, in 1992 the Community Foundation funded the creation of a master plan. $16 million in capital needs were identified and Muskegon County voters were asked to approve a bond issue. Through their generosity the work began in 1998.
Schlossman and Crane’s vision of a Spanish castle had been restored to the theater, along with many accessibility and safety improvements. All lighting, sound and rigging was brought up to the highest current technology and was greatly expanded. Additionally, a new two-level lobby was built connecting the foyer of the old theater with the lobby of the Hilt building. The lower level holds a 100 seat restaurant and bar and additional restrooms.
Continuing to improve the quality of life for the residents of Muskegon County, the Community Foundation for Muskegon County remains committed to the Frauenthal Center for the Performing Arts and its future development. Many exciting performances continue to grace the stage of the Frauenthal Theater.
Call Polly Doctor at 231-332-4102 to learn how you too can play an important part in the ongoing success and preservation of Muskegon’s gem and West Michigan’s grandest theater.
Originally Posted by rlitman
Note that the units are in milligrams per liter, and that the chart is assuming pure oxygen. Atmospheric pressure is 1 bar (at sea level), but the oxygen partial pressure is just 1/5 of that, so the solubility is 1/5th the number at 1 bar.
Assuming you have a 1000 gallon freshwater pond (salt reduces the solubility), at 32F, you could have as much as 0.27 cubic feet of dissolved oxygen. At 68F that number drops to a whopping 0.17 cubic feet dissolved in that entire 1000 gallons.
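The conversion the quoted post is doing can be sketched in a few lines. The constants below are round illustrative values that I am assuming (I don't have the poster's chart), so the output need not match the 0.27 / 0.17 cubic-foot figures quoted above:

```python
# Assumed round-number constants for illustration only.
O2_GAS_DENSITY_G_PER_L = 1.43   # density of O2 gas near 0 C and 1 atm
LITERS_PER_GALLON = 3.785
LITERS_PER_CUBIC_FOOT = 28.317

def dissolved_o2_cubic_feet(pure_o2_solubility_mg_per_l, gallons,
                            o2_fraction=0.21):
    """Volume (cubic feet of gas) of O2 dissolved in `gallons` of fresh
    water saturated under air.  The chart's pure-oxygen solubility is
    scaled by oxygen's share of air, per the Henry's-law argument above."""
    mg_per_liter = pure_o2_solubility_mg_per_l * o2_fraction
    grams = mg_per_liter * gallons * LITERS_PER_GALLON / 1000.0
    gas_liters = grams / O2_GAS_DENSITY_G_PER_L
    return gas_liters / LITERS_PER_CUBIC_FOOT

# E.g., assuming a pure-O2 solubility of roughly 70 mg/L for cold water:
cold_volume_ft3 = dissolved_o2_cubic_feet(70.0, 1000)
```

Plugging in a lower solubility for warm water shows the point of the thread: the same pond holds less oxygen as it warms.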
Correct. Warm water will hold less Oxygen than cold water. That is not what you implied in your post.
There is more Oxygen, by volume, in cold water (when the fish don't need it) than there is in warm water (when the fish DO need it). One of nature's conundrums.
Astronaut Edwin E. Aldrin Jr., lunar module pilot, is photographed during the Apollo 11 extravehicular activity (EVA) on the lunar surface. In the right background is the lunar module. On Aldrin's right is the Solar Wind Composition (SWC) experiment, already deployed. This photograph was taken by astronaut Neil A. Armstrong, commander, with a 70mm lunar surface camera.
NASA is marking the 45th anniversary of the first moon landing this month. Here, in a series of videos from the archives, are some of the events of that historic mission.
This video, featuring comments from the late Apollo 11 astronaut and research pilot Neil Armstrong, explores the contributions of the Lunar Landing Research Vehicle (LLRV) development and flight-testing at NASA's Flight Research Center, recently renamed in Armstrong's honor, to the Apollo moon-landing program.
After making the 240,000-mile journey to the moon cruising through open space, the last 300 feet down to landing represented the most difficult and dangerous part of the Apollo missions. The Apollo astronauts needed a way to practice that final descent and landing before Apollo 11 astronauts Neil Armstrong and Edwin "Buzz" Aldrin made the first historic moon landing in their lunar lander named Eagle on July 20, 1969.
In this video: Aired in July 1969 and depicts the Apollo 11 astronauts conducting several tasks during extravehicular activity (EVA) operations on the surface of the moon as well as pre-launch preparations and post launch activities and celebrations.
The story of the first Moon landing in July 1969. Depicts the principal events of the mission, from the launching through the post recovery activities of astronauts Armstrong, Aldrin, and Collins. Through television, motion pictures, and still photography, the program provides an "eyewitness" perspective of the Apollo 11 mission.
A documentary of the Apollo 11 launch, lunar landing and exploration and return to earth which included a stay in quarantine.
The Journeys of Apollo is a previously produced documentary narrated by Actor Peter Cullen that relives the 40th Apollo Anniversary and mission to explore Earths neighbor, the moon.
This documentary gives an in-depth look at the Apollo 11 mission to the moon. NASA archival footage, as well as reactions to the mission around the world, shows the enormous impact that the moon landing had.
Restored Apollo 11 EVA.
CBS Television coverage of the July 20, 1969 Apollo 11 moon landing, anchored by legendary newscaster Walter Cronkite.
A New Look at the Apollo 11 Landing Site
Apollo 11 landed on the Moon on July 20th, 1969, a little after 4:00 in the afternoon Eastern Daylight Time. The Lunar Module, nicknamed Eagle and flown by Neil Armstrong and Edwin "Buzz" Aldrin, touched down near the southern rim of the Sea of Tranquility, one of the large, dark basins that contribute to the Man in the Moon visible from Earth. Armstrong and Aldrin spent about two hours outside the LM setting up experiments and collecting samples. At one point, Armstrong ventured east of the LM to examine a small crater, dubbed Little West, that he'd flown over just before landing.
The trails of disturbed regolith created by the astronauts' boots are still clearly visible in photographs of the landing site taken by the Lunar Reconnaissance Orbiter (LRO) narrow-angle camera (LROC) more than four decades later.
LROC imagery makes it possible to visit the landing site in a whole new way by flying around a three-dimensional model of the site. LROC scientists created the digital elevation model using a stereo pair of images. Each image in the pair shows the site from a slightly different angle, allowing sophisticated software to infer the shape of the terrain, similar to the way that left and right eye views are combined in the brain to produce the perception of depth.
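The core idea of inferring shape from a stereo pair can be illustrated with a toy one-dimensional example. The real LROC terrain-modeling pipeline is far more sophisticated; the scanline values, focal length, and baseline below are all made up for illustration:

```python
def best_disparity(left, right, max_shift):
    """Return the integer pixel shift of `right` relative to `left` that
    minimizes the sum of squared differences over the overlapping part."""
    def ssd(shift):
        return sum((a - b) ** 2 for a, b in zip(left, right[shift:]))
    return min(range(1, max_shift + 1), key=ssd)

# Two synthetic "scanlines": the same brightness pattern seen from two
# viewpoints, offset by a disparity of 3 pixels.
pattern = [0, 0, 5, 9, 5, 0, 0, 2, 8, 2, 0, 0]
left = pattern
right = [0, 0, 0] + pattern[:-3]  # shifted copy of the pattern

disparity = best_disparity(left, right, max_shift=5)
# Depth is proportional to (focal length * baseline) / disparity,
# so a smaller disparity means the surface is farther away.
depth = (1.0 * 30.0) / disparity  # made-up focal length and baseline
```

Matching every pixel this way over two full images, then converting each disparity to a height, is the essence of how a digital elevation model is built from a stereo pair.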
The animator draped an LROC photograph over the terrain model. He also added a 3D model of the LM descent stage--the real LM in the photograph looks oddly flat when viewed at an oblique angle.
Although the area around the site is relatively flat by lunar standards, West Crater (the big brother of the crater visited by Armstrong) appears in dramatic relief near the eastern edge of the terrain model. Ejecta from West comprises the boulders that Armstrong had to avoid as he searched for a safe landing site.
Apollo 11 was the first of six increasingly ambitious crewed lunar landings. The exploration of the lunar surface by the Apollo astronauts, when combined with the wealth of remote sensing data now being returned by LRO, continues to inform our understanding of our nearest neighbor in space.
Adair County was established in 1851, and named for John Adair, General during the War of 1812 and 6th Governor of Kentucky. The General Assembly appointed three commissioners to locate a county seat. They selected Summerset (now Fontanelle) in 1855. The first courthouse was built a year later. Native lumber and hardware were hauled by wagon from Keokuk for the building. This building burned down in 1910.
The town of Greenfield, located near the center of the county, was laid out in 1856. From this time on, people of Greenfield fought to have the county seat moved to Greenfield. A petition was signed by 91 voters in 1858 to have it moved, but at the same time, another petition containing 137 signatures was presented to keep the seat in Summerset.
“Courage begins by trusting oneself.” – The Clone Wars
In France, people often use the word “courage” or “bon courage” to encourage a friend or an acquaintance who is facing a challenge, large or small. It is not a word used in English as an informal expression of one’s hope that another will succeed or prevail. We will rarely hear someone say “Be brave” or “Have courage” the way we might say “Good luck” or “Have a nice day”. In France it is used often, and it is meant to remind one of the virtue of courage and its universal application in all aspects of life. It is a reminder that self trust is the root of courage.
Courage was considered by the Stoics as one of the most important virtues that a person could attain. Along with wisdom, justice and temperance (self-control), courage was considered essential to living a good life. Perhaps with the adoption of the Stoic philosophy by the Romans and its eventual influence on Christianity the virtue of courage became embedded in the Romantic languages such as French.
The Latin word for courage is “cor” which roughly translates to “heart”. When people say, “He had the heart of a Lion” they mean he had courage which was exemplary. More than courage, the person had “heart”. “Heart” often refers to the inner resolve and spirit of a person which courage is a part of. A person may have the courage to face a fight and enter a ring to face an adversary but “heart” keeps him in the fight even when the odds are stacked against him. The person is not being reckless or suicidal; the person has the self trust to carry on past any fears and doubts.
“Nihil tam acerbum est in quo non æquus animus solatium inveniat”
“There is nothing so disagreeable, that a patient mind can not find some solace for it”. – Seneca the Younger
The Latin word “Animus” was used to describe something more than “heart”. Animus roughly translated to the virtues of spirit, mind and courage. Animus entails the development of human mind, body and spirit and the transcendence of the human consciousness to higher levels.
Carl Jung believed that the masculine Animus and the feminine Anima are part of the collective unconscious in humans, transcending the personal psyche. Jung believed that humans evolved along a trajectory which culminates at transcendence, the expression of the rational soul. Seneca also described Animus to mean the rational soul expressed as the reasoned mind.
At the highest level Animus is the antithesis of the ego. The Ancient Greeks and Romans recognized that the ego was the greatest challenge that people faced. The root of all fears and doubts stems from the ego. The ego overrides reason and better judgement.
Cor (Heart) is needed to overcome that fear and arrive at a state of Animus which breaks us free from the grip of the ego. By finding Animus we overcome the barriers that we have built to stop us getting where we want to go.
“Courage is not the lack of fear. It is acting in spite of it” – Mark Twain
The Ancient Greeks and Romans considered Animus to be exemplified by the “warrior spirit” of duty, sacrifice, loyalty, honor and courage. When a warrior died in battle they had achieved the greatest feat for their nation. The Ancients believed that a warrior slain on the battlefield held an esteemed place in the underworld of the dead.
Even today we revere and honor our fallen heroes and use words such as courage, bravery and selflessness to describe them. Soldiers still use the slogan “Until Valhalla” in reference to the glory assigned to fighting with spirit and dying with honor. They are not fanatics, they trust themselves and their comrades beside them.
The purpose of the “Hero’s Journey” is for the one “called to adventure” to find their internal Animus by overcoming the trials and challenges that stand before them. By venturing into the dark and the unknown, one arrives at light and knowledge. By sinking into despair, one finds hope. Through defeats and disappointments, one finds the strength to overcome and the will to continue on to victory. The story has been told and retold through the myths and stories of the ages. We see it clearly in the saga of Star Wars. These stories inspire us.
“Bonus animus in mala re, dimidium est mali”
“Courage in danger is half the battle.” – Plautus
You do not need to be a hero on a life and death mission to discover your own Animus. I once thought the only way to truly test myself and find honor was by going to war. One does not need to do either to live a good and meaningful life. Life will test our courage and strength in many ways. It may be as simple as practicing principles even when others push the boundaries and provoke us. Staying sober is a daily and sometimes hourly test of resolve. We can express Animus in everything we do.
The French regularly say “Bon courage” as an offering of support to someone who is facing a challenge or a difficult time. It is an odd expression to the English ear, but it makes perfect sense. What the French are saying is much more than “Bonne chance” – “Good luck”. “Courage” is a reminder that everyone has an inner and sacred Animus that resides within. If one has the self trust to find heart and dig deep enough, they will find it there, and it will give them all the strength they need to prevail.
“Gratus animus est una virtus non solum maxima, sed etiam mater virtutum omnium reliquarum”
“A courageous heart is not only the greatest virtue, but the parent of all the other virtues.” – Marcus Tullius Cicero
Sharon Matute October 24, 1999
Art 100 – 007 Professor Sax
Art can be used to study the progression of a civilization through time. Art is usually used to express one’s beliefs religiously, politically, and sometimes as a source of communication, which is accomplished through imagery. Symbols in works of art can be related to nature and myths.1 From the beginning of Chinese history, art and philosophy worked hand-in-hand with the creation of a work of art. Chinese art was used as evidence of a person’s behavior and attitude towards nature and other beings (e.g. the nicer the painting the better the person.)2
During the seventh and eighth centuries Chinese art was at its peak. China at this time was under the jurisdiction of the T’ang Dynasty. Because of the beautiful work being manufactured China became a multinational society. Paintings and sculptures were not the only works that China would receive admiration for. Their music and literature (poems which sometimes explained works of art) were also at their richest points,3
T’ang art has incomparable vigor, realism, dignity… There is an optimism, an energy, a frank acceptance of tangible reality which gives the same character to all T’ang art, whether it be the most splendid fresco
from the hand of a master or the humblest tomb figurine made by the village potter. (Sullivan 160)
When a piece of artistic work was considered good all that really mattered was the amount of effort that went into the piece and not the derivation of the person’s economic class. Scarce materials were used very often in the creation of Chinese artifacts.
One of the most famous and revered stones used was Jade, which is very hard and indestructible. Jade cannot be found in China; it was traded from Burma, which lies on the outer edge of China, so it is amazing how much work was done with it in the 600s and 700s. Jade was usually used in burials to seal the orifices of the body. This mineral was also recognized for having a beautiful reverberating tone. Jade was carved by pulverizing it with an abrasive powder, a skill the Shang craftsmen adapted from their Neolithic predecessors.4 Fine details were cut with a wire saw, and the piece was then smoothed with a polishing wheel.5
In the process of working with Jade the artisan would have to form a respect induced relationship between self and the material. When the artist first receives the material he would not begin to carve because the contour, proportions, and decoration of the piece would depend on religious ceremony. Craftsmen would sometimes study a piece of Jade for many years before deciding what to do with it.
Jade comes in an array of colors ranging from yellow to brown and from light green to bright green, black and dark purple, and those of the highest value were white. Each color of Jade had a specific classification, such as ink black, snow, kingfisher green, sea green, grass green, vermilion red and mutton-fat. Green stones in Chinese culture are deemed to have healing powers. That was my main reason for having such an interest in relics made from green minerals.6
The piece I chose to study is called the Nine Elders of the Huichang, Mountain Scene of the celebrated gathering in 845 C.E. The Jade used is green nephrite from Hotan. This piece sits in the Peking Palace Museum. It stands 4 ft. high, 3 ft. wide and weighs 1,830 pounds. This piece was completed in 1786 with the addition of a
poem engraved on the back of the figurine by the Qianlong emperor. The frontal view illustrates a scene of the first and second elders playing chess in the gazebo and the third elder observing. Below that a small servant boy is boiling water for tea. The fourth and fifth elders are conversing and strolling over the bridge, followed by another boy servant. The remaining four elders can be seen on the reverse side of the effigy. The sixth elder has his hand on a boy’s head and they are both absorbing the beauty of nature. The seventh senior is walking with the assistance of a bamboo stick
Back to the swarm. Dr. Tom Seeley says that to conserve energy, a swarm keeps its temperature too low for flight until the swarm is ready to take off. The scout bees grab hold of the low temperature bees and shake them to get them warmed up. I thought of this as I noticed the top of the swarm seemed to be "bubbling"—a lot of activity. As I trimmed around the swarm the activity increased, then the bees began sloughing off the sides and within a minute they were airborne.
Tanging: the beating on a pot or clanging a bell to induce a swarm to alight. The superstition persists, probably because it appears to work most of the time (especially when the swarm, emerging from a hive, is going to alight anyways). I had a pan handy, and no one around to see me make a fool of myself. So I tanged them out of sight. Good bye bees.
| Transformation | Effect |
| --- | --- |
| y = f(x) + k | up k units |
| y = f(x) - k | down k units |
| y = f(x + h) | left h units |
| y = f(x - h) | right h units |
| y = m·f(x) | stretch vertically by a factor of m |
| y = (1/m)·f(x) | shrink vertically by a factor of m (stretch by 1/m) |
| y = f(x/n) | stretch horizontally by a factor of n |
| y = f(nx) | shrink horizontally by a factor of n (stretch by 1/n) |
| y = -f(x) | reflect over x-axis (over line y = 0) |
| y = f(-x) | reflect over y-axis (over line x = 0) |
| x = f(y) | reflect over line y = x |
We can combine operations, as long as we pay attention to the order in which we alter inputs and outputs. Operations on outputs follow the order of operations, and operations on inputs follow the reverse order of operations (since we have to "undo" them). Thus, the equation of a function stretched vertically by a factor of 2 and then shifted 3 units up is y = 2f(x) + 3, and the equation of a function stretched horizontally by a factor of 2 and then shifted 3 units right is y = f((1/2)(x - 3)) = f((1/2)x - 3/2).
Example: f(x) = 2x².
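These rules can be sanity-checked numerically. The sketch below (the helper names g and h are illustrative, not part of the original) applies the two combined transformations just described to the example function:

```python
def f(x):
    return 2 * x ** 2  # the example function above

def g(x):
    # Vertically stretched by a factor of 2, then shifted 3 units up:
    # operations on outputs follow the order of operations.
    return 2 * f(x) + 3

def h(x):
    # Horizontally stretched by a factor of 2, then shifted 3 units right:
    # operations on inputs follow the reverse order of operations.
    return f((x - 3) / 2)

print(g(1))  # 2 * f(1) + 3 = 7
print(h(3))  # f(0): the point that was at x = 0 now sits at x = 3
print(h(5))  # f(1): the graph has been widened by 2 and slid 3 right
```

Checking a few points this way is a quick test that the inner operations really were applied in reverse order.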
Complying With Gender-Based Equal Pay Laws
Federal law prohibits covered employers from basing pay differences solely on gender.
Under federal wage and hour law, employers subject to the Fair Labor Standards Act must comply with the Equal Pay Act. To be in compliance, an employer cannot pay an employee of one gender less than an employee of the opposite gender who is doing the same or substantially the same work.
Therefore, in order to comply with equal pay requirements, you need to first determine if you're covered by equal pay laws.
The Equal Pay Act applies only to pay differences between men and women. It does not address pay inequities motivated by race, color, religion, or national origin. Pay inequities related to protected groups other than gender groups are covered by federal anti-discrimination law.
Are You Subject to Equal Pay Laws?
If you are subject to the Fair Labor Standards Act (FLSA), you are also subject to Equal Pay Act requirements (i.e., you must pay men and women the same where they have the same job duties and qualifications). Employees who are exempt from the FLSA are not subject to equal pay requirements, except in the case of executive, administrative, and professional employees and outside salespersons.
Employees not covered by equal pay laws. Employees who fall into one of the following categories are also exempt from equal pay requirements:
- employees of amusement or recreational establishments having seasonal peaks
- seamen on non-American vessels
- employees engaged in the fishing industry, including offshore seafood processing
- agricultural employees of an employer who did not use more than 500 man-days of agricultural labor in any quarter of the preceding calendar year
- agricultural employees who are members of the employer's immediate family
- hand-harvest laborers who are paid on a piece rate basis, commute daily from their permanent residences, and whose agricultural employment, if any, during the preceding calendar year was for less than 13 weeks
- hand-harvest laborers under 17 years of age who are employed at a piece rate on the same farm as their parents
- workers principally engaged in the range production of livestock, such as cowboys and shepherds
- employees of weekly, semiweekly, or daily newspapers of less than 4,000 circulation, the major part of which is in the county of publication or contiguous counties.
- switchboard operators employed by independently owned public telephone companies having not more than 750 stations.
- employees who are casual babysitters or companions to ill or aged persons unable to care for themselves
If you determine that you as an employer and/or your employees are covered under the Equal Pay Act, your next move should be to examine your current pay structure, and make sure that there are no potential violations.
Analyze Your Wages to Ensure Compliance With Equal Pay Requirements
If you're covered by the Equal Pay Act, analyzing your pay structure can help ensure you're in compliance with equal pay requirements.
If you only have a few employees and if no two people in your office do the same job, you probably won't need to address this issue at all. However, if you have people of different genders who do the same or substantially the same job, you should look at the pay they receive. If you spot differences in pay between men and women for the same work, you should make sure that you can prove that those differences are based on something other than gender.
If you have more than a couple of employees, how do you analyze your pay structure? One simple way is to:
- Average the pay (on a weekly, biweekly, monthly, or hourly basis) for all men and women in a particular job class.
- Compare the salaries of the men and women in that job class. Do all the women fall below the average while all the men fall above it? Unless you have another explanation for those kinds of differences (such as seniority, education, experience, etc.), you may need to consider taking some steps to rectify the situation.
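As a minimal sketch of those two steps (the record layout and the pay figures here are hypothetical, invented for illustration), the comparison could be scripted like this:

```python
from statistics import mean

# Hypothetical payroll records: (name, gender, job_class, hourly_pay)
records = [
    ("Jack", "M", "receptionist", 14.25),
    ("Bob", "M", "receptionist", 13.00),
    ("Jill", "F", "receptionist", 11.25),
    ("Brooke", "F", "receptionist", 12.00),
]

def average_pay_by_gender(rows, job_class):
    """Average hourly pay per gender within a single job class."""
    pay = {}
    for _name, gender, job, hourly in rows:
        if job == job_class:
            pay.setdefault(gender, []).append(hourly)
    return {gender: mean(rates) for gender, rates in pay.items()}

print(average_pay_by_gender(records, "receptionist"))
# {'M': 13.625, 'F': 11.625}
```

A gap like the one above does not by itself prove a violation; it flags a job class where the pay differences need a lawful explanation such as seniority, qualifications, or extra duties.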
In analyzing your pay structure and making sure that you're in compliance, you have to know:
- what each job entails
- what working conditions the jobs are performed under
- what skill and effort is required to do the job
If you have job descriptions for your employees' positions, this is a perfect time to use them.
What kinds of things should you look for in your analysis? In analyzing your pay structure, look for instances where a female employee and a male employee do the same work and one employee's pay is much higher. Figure out why that's the case. Some common — and lawful — explanations could be:
- shift differentials
- quantity or quality of work
- additional job duties
- working conditions
- additional skills required
If you employ Jack and Jill as receptionists and if Jack doesn't have seniority over Jill because they were hired at the same time, you cannot pay Jack more because he is a male.
However, if Jack is paid more because he has extra job duties, such as ordering office supplies, or better qualifications, such as experience than Jill, you may have valid reasons for paying Jack more than Jill.
In your analysis, sometimes a problem will be simple to spot, such as when you pay Bob $14.25 per hour and you pay Brooke $11.25 per hour for the same work, and there are no other explanations for the disparity. However, there are other types of violations that aren't so easy to see.
For example, are there situations where males predominantly occupy a certain kind of job that pays more than other jobs? While this situation may say something about your recruiting and hiring practices, it may also lead to problems with Equal Pay Act claims.
You employ four sales managers and nine sales representatives. The managers get paid an average of $70,000 annually, and the sales reps get paid an average of $30,000 annually. All of your managers are males, while seven out of nine of your sales representatives are female.
While this situation in and of itself does not necessarily put you in violation of the Equal Pay Act, it should raise a red flag. If men are in most or all of your highest paying jobs while women are in most or all of your lower paying positions, you'll want to look into this problem and make sure that the differences in pay and in gender/job distribution are motivated by factors other than gender.
Correcting Pay Inequities
If you see a situation where there is clearly a problem with females being paid less than males for the same work, or vice versa, you need to fix the problem by making the wages more equitable.
It is illegal to reduce the pay of one gender to match the lower pay of the other. You have to raise the pay of the employee who is being paid less.
Fix the problem as soon as possible. Don't wait until the employee's next raise to bridge the salary gap.
If the problems are more subtle, your hiring and promotion procedures may be the problem. Be sure to give females the same opportunities to get the higher paying jobs as males.
Career as a Film Producer
There are many different jobs in film, ranging from behind the scenes to in front of the camera, from setting up to taking down, from writing to editing. Producing is one possible career an international student can choose to dive into. Producers are the glue of the production. They do just what their name suggests: produce material, produce the movie, the play, the performance. A job as a producer requires creativity and micromanagement.
What is a producer?
Having a job as a producer means making "the business and financial decisions involving a motion picture, television show, or stage production," according to the Bureau of Labor Statistics. As a producer, your number one priority is to raise money for the production and then to make sure that the production makes money at the box office. As a producer, you will also select scripts, hire and work closely with the director and screenwriters, help choose the actors and talent, and create a budget and shooting schedule, among many other duties. Sometimes, in bigger productions, a producer will have a line producer and assistant producer working under them, as well as executive producers. A line producer handles day-to-day scheduling and budgeting and works with the director on set. Executive producers are the ones financing the production, and the title is largely honorary.
Having a career as a producer means irregular hours and unusual locations. Being a producer is a stressful job: making sure the production is on schedule and on budget, and that everyone from the actors and directors to the union is happy, means the hours can be long. Weekends are a guarantee. Whatever the shooting schedule for the production is, the producer will be there long after it has wrapped up. Since shooting a production often happens over mere months, the work proceeds at a rapid pace, and because it lasts only months, work is unsteady and not guaranteed. Many producers do other jobs on the side. Also, since producers are often where the shoot is, they may travel to different locations rather than staying in a studio or on a sound stage.
How to become a producer
International students who want a job as a producer might want to consider going to a university with a film or business program, such as Full Sail University. Since producers aren't just creative directors and are very involved in management and fundraising, business is a good major to study. Film study will also be beneficial in that you learn the workings of a production and the technology that accompanies it. Internships in the film business are a must in that many producers are not successful without experience. "Producers often start in a theatrical management office, working for a press agent, managing director, or business manager. Some start in a performing arts union or service organization. Others work behind the scenes with successful directors, serve on the boards of art companies, or promote their own projects," according to the Bureau of Labor Statistics.
"In May 2008, actors, producers, and directors held about 155,100 jobs, primarily in the motion picture and video, performing arts, and broadcast industries. This statistic does not capture the large number of actors, producers, and directors who were available for work but were between jobs during the month in which data were collected. About 21 percent of actors, producers, and directors were self-employed," according to the Bureau of Labor Statistics.
"Median annual wages of producers and directors were $64,430 in 2008. The middle 50 percent earned between $41,890 and $105,070. Median annual wages were $85,940 in the motion picture and video industry and $55,380 in radio and television broadcasting," according to the Bureau of Labor Statistics.
Fur animal feed may spread new animal diseases
In 2015, the situation on Finnish production animal farms was good in terms of animal diseases, but African swine fever, among other diseases, poses a serious risk. The situation is continuously changing globally.
Evira’s risk profile examines the health risks to which cattle, pigs, sheep, goats and aquaculture plants are exposed.
“Organic matter made up of by-products, such as slaughter waste, can contain a variety of pathogens. It is possible for an animal disease to be imported in by-products due to human error somewhere in the long processing chain, either abroad or in Finland, or in the case of a breach of regulations,” says Leena Sahlström, Senior Researcher, DVM, PhD of Evira’s Risk Assessment Research Unit.
The consequences and financial implications of pathogens entering the country would depend on the disease in question and the production animals affected. In the worst-case, there could be financial implications for almost the entire animal production sector.
Risks are reduced when the rules are followed
Animal by-products are imported into Finland for the needs of various fields of production. If rules are adhered to, the disease propagation risk is eliminated. By contrast, even a minor processing error can lead to the spread of a disease.
“Pathogens may be spread by rodents, tools used on farms, wind, or slurry or waste water run-off from a neighbouring farm. Short distances between farms and by-product plants can contribute to the spread of pathogens,” says Sahlström.
Pathogens surviving in fur animal manure spread on fields can represent a significant risk to grazing animals on farms with both production and fur animals. Pathogens are most likely to enter Finland in raw materials for fur animal feed in the summer or early autumn, when the need for feed and the related imports of raw materials are at their peak.
Aquaculture plants are also at risk if they are located in an area affected by waste water or slurry run-off from a fur animal farm or by-product plant.
Further research is needed
There is a risk that non-processed by-products will begin to be imported into Finland.
“For this reason, risk assessment and studies of imported by-products must be continued. Further research is also necessary if large quantities of by-products begin to be used in products other than fur animal feed, such as fertilisers. Additionally, it would be beneficial to examine whether by-products pose a risk to poultry,” says Sahlström.
Hazards of importing category 2 animal by-products – a risk profile. Description is in English.
For further information, please contact:
Leena Sahlström, Senior Researcher, DVM, PhD, tel. +358 50 464 8051
Saturated fat is a kind of fat. It contains no double bonds; its carbon atoms are fully saturated with hydrogen. Saturated fats are usually solid at room temperature. Unsaturated fats, by contrast, have one or more double bonds.
Is saturated fat a risk factor for cardiovascular disease (CVD)? This is a question with many controversial views. Although most in the mainstream heart-health, government, and medical communities hold that saturated fat is a risk factor for CVD, some recent studies have produced conflicting results.
Health
Saturated fats are a kind of fat. For a long time scientists believed that eating saturated fat was a leading cause of heart attacks, cancer, and other diseases. However, new research has shown no clear connection between how much saturated fat you eat and heart disease. This is still a controversial question.
Things like butter, nuts, chocolate and meat have lots of saturated fat.
The compound
Saturated means that the molecule holds all the hydrogen atoms that it can: because there are no double bonds, each carbon (C) atom is bonded to as many hydrogen (H) atoms as possible.
Related pages
Other websites
- Foods high in saturated fat.
References

- Siri-Tarino P.W. et al (2010). "Meta-analysis of prospective cohort studies evaluating the association of saturated fat with cardiovascular disease". The American Journal of Clinical Nutrition 91 (3): 535–46.
- Yamagishi K. et al (2013). "Dietary intake of saturated fatty acids and incident stroke and coronary heart disease in Japanese communities: the JPHC Study". European Heart Journal: 1225–1232.
- Kuipers R.S. et al (2011). "Saturated fat, carbohydrates and cardiovascular disease". Netherlands Journal of Medicine 69 (9): 372–377. http://www.njmonline.nl/getpdf.php?t=a&id=10000756
What does CPI mean in General?
This page is about the meanings of the acronym/abbreviation/shorthand CPI in the Computing field in general and in the General terminology in particular.
What does CPI mean?
- consumer price index, CPI, cost-of-living index (noun)
- an index of the cost of all goods and services to a typical consumer | <urn:uuid:3af62d45-0378-4aaa-8450-1b9c5f623a3d> | {
"date": "2015-10-08T22:29:37",
"dump": "CC-MAIN-2015-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737904854.54/warc/CC-MAIN-20151001221824-00116-ip-10-137-6-227.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8821228742599487,
"score": 2.515625,
"token_count": 95,
"url": "http://www.abbreviations.com/term/79242"
} |
Innovation is one of those words that is as loaded as it is inescapable.
It appears constantly on billboards, TV commercials and political speeches. I’ll wager every big organization in the world lays claim to the concept through a mission statement or some other purported self-description. Our hopes for improved institutional outcomes–from schools, from hospitals, from governments–are all stoked by a devotion to the glimmering promise of doing things better in a new way.
What about digital preservation? Is innovation the key to dealing with all that valuable digital data?
This is, of course, a very unsatisfying answer. Innovation should be the answer to everything, most especially to all things digital.
“Never before in history has innovation offered promise of so much to so many in so short a time,” is a quote attributed to Bill Gates, and at first glance it seems to ring with a self-evident truth.
When considered from the popular perspective of innovation, digital preservation looks like a straightforward challenge for libraries, archives, museums and other entities that long have kept information on behalf of society. All they need are some new ideas, practices and tools–all of which information technology excels in delivering. There’s also a neat symmetry here: technology created new kinds of information for libraries to preserve, so technology can help libraries do the job.
But it isn’t quite so easy. The basic problem is what Larry Downes has called “the laws of disruption,” of which the most fundamental is “technology changes exponentially, but social, economic and legal systems change incrementally.” Downes notes that innovative digital technology has thoroughly roiled many social conventions and that “nothing can stop the chaos that will follow.” An overly dramatic statement, yes, but it illustrates that innovation is not a safe, orderly or controllable process. It sends out big ripples of disruption with an unpredictable impact.
Consider the irony: organizations tout innovation as a way to thrive and prosper when the truth of the matter is that real innovation often destabilizes and destroys.
Libraries and other memory organizations are now bouncing on ripples of disruption, and the ride likely will stay scary for the foreseeable future. Innovation puts these institutions in a bind: they are now confronted with a huge array of demands and choices that traditional structures are ill-suited to address. They face an irresistible need for change. But the further they stick their toes into the waves of innovation, the greater the potential for even more destabilization. And since most institutions strongly resist that which threatens their stability, they have an unmovable incentive to resist real change. All this means that the ability of traditional institutions to fully meet the need for digital preservation is in doubt.
Well, that’s depressing. Wait, though–there’s another side to innovation that offers hope for meeting the digital preservation challenge. Many individual librarians and archivists are using new kinds of tools and services–such as LOCKSS and “micro-services”–to build local preservation solutions.
Even more significantly, individuals of all kinds are playing a role in determining what gets saved and how that content is used. Consider the impact that one person–Brewster Kahle–has made over the years through the Internet Archive. Jason Scott is getting high-profile attention for his grassroots work to preserve large volumes of web content abandoned by companies such as Yahoo!. All kinds of average people are developing interest in personal digital archiving to preserve their family memories.
Tim O’Reilly, the visionary who first saw the development known as web 2.0, sees a major role for individuals in digital preservation. Here’s a summary from an account of his talk at a recent Library of Congress meeting:
O’Reilly stressed the preservation role of people working outside of institutions. He called for “baking in” more preservation functionality into tools used to create and distribute digital content to enable a more distributed stewardship mindset. This is important because “the things that turn out to be historic are not thought to be historic at the time.” O’Reilly also said one of the most tweetable bits at the meeting: “Digital preservation won’t be just the concern of specialists, it will be the concern of everyone.”
I have some sympathy with O’Reilly’s argument. It builds on the powerful trend of individuals asserting control over how information is published, distributed and used. The result of a broad-based popular effort to steward digital data would also address some fundamental preservation needs: lots of distributed copies that are open for active use. Individuals also often can adapt to change with more flexibility than can institutions.
Ultimately, we have to hope that innovation pushes along the trend toward the democratization of digital preservation. The more people who care about saving digital content, and the easier it is for them to save it, the more likely it is that bits will be preserved and kept available. | <urn:uuid:acbfb5f7-64db-44b3-8515-c28d2aeb82ba> | {
"date": "2013-05-25T05:45:09",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705559639/warc/CC-MAIN-20130516115919-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9496615529060364,
"score": 2.515625,
"token_count": 1034,
"url": "http://agogified.com/932"
} |
Different cities are taking different approaches to reducing both pollution and car usage. While Londoners are charged a fee for entering the central district of the city, certain German cities are now banning the most polluting cars from downtown areas. Next time you’re considering driving your black smoke-spewing clunker into central Berlin, Cologne, or Hanover, think again!
Cars in the three cities are now required to display a colored pollution badge (green, yellow, or red) showing the vehicle’s pollution level. Vehicles in the fourth and worst category, those that pollute too heavily to qualify even for the red label, are no longer allowed to enter the city center; an estimated 1.7 million vehicles fall into this group. Drivers caught entering the city center in one of them will face a fine.
The reform “is the most serious attempt until now to get to grips with the most serious source of air pollution, which causes 75,000 premature deaths per year,” said German green group Deutsche Umwelthilfe. Anything that will get people to use public transportation, and that ensures that the worst polluting vehicles are slowly taken off the roads is fine by us. 20 more cities in Germany will follow suit this year. | <urn:uuid:3e4a8d75-30d2-4db4-81c8-f3aba7375b70> | {
"date": "2014-07-22T23:52:08",
"dump": "CC-MAIN-2014-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997869778.45/warc/CC-MAIN-20140722025749-00192-ip-10-33-131-23.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.950416088104248,
"score": 2.859375,
"token_count": 252,
"url": "http://inhabitat.com/germany-bans-polluting-cars-from-city-centre/"
} |
Puerto Rico is an island in the Caribbean Sea and is the fourth largest island in what is known as the Greater Antilles. It is an unincorporated territory of the USA and is close to both the British and United States Virgin Islands and has many smaller islands of its own, with Mona among its largest.
Puerto Rico also has a tropical climate and is affected by hurricanes. It is one of three countries in the Caribbean with Spanish as its official language. The island is mainly mountainous with the main mountain range in the centre of the country. There are coasts on the north and south while there are also several man-made lakes and more than 50 rivers.
The Puerto Rican culture relies heavily on music… singers such as Jennifer Lopez, Marc Anthony, and Ricky Martin all hail from Puerto Rico.
There is a large road network, with the major towns and cities connected by freeways and expressways much like in the United States. The main areas are served by public buses, and there are also minibuses. There are three international airports in Puerto Rico and 27 smaller ones.
"date": "2018-04-20T22:26:45",
"dump": "CC-MAIN-2018-17",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944742.25/warc/CC-MAIN-20180420213743-20180420233743-00576.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9813467264175415,
"score": 2.640625,
"token_count": 220,
"url": "https://blog.circleme.com/2013/08/28/puerto-rico-rich-port/"
} |
How Coke Helped Create Pepsi, and Other Historic Market Moments
On this day in economic and business history ...
The stories of Coca-Cola and PepsiCo have a lot more in common than brown, fizzy sugar water. Coke was invented by a Southern pharmacist in the late 1800s. So was Pepsi. The family of Coke's inventor lost control of the formula shortly after his death. The family of Pepsi's inventor lost control of the formula shortly before his death. Coke was pushed to popularity by a marketing-savvy businessman. Pepsi was, too -- following its post-bankruptcy acquisition by candy maker Charles Guth, who incorporated Pepsi-Cola, progenitor of the modern PepsiCo, on Aug. 10, 1931.
Pepsi inventor Caleb Bradham was not a business mastermind, and his Pepsi-Cola went into bankruptcy in 1923. Pepsi's second owner also failed to turn Pepsi into a proper rival for Coke (by then entering a period of phenomenal growth thanks to the marketing genius of president Robert Woodruff), and this Pepsi-Cola also fell into bankruptcy in 1931.
Guth leapt at the opportunity to buy Pepsi's assets for just $10,500 (about $160,000 today), but not simply because he saw a great business opportunity. Guth also happened to be president of Loft, a major candy manufacturer that operated soda fountains in its stores. Coke had provided the cola syrup of choice at Loft's stores, until it refused to give Guth the wholesale discount he wanted. A cost-conscious Guth began to explore for alternatives and soon found Pepsi, which at that time was rapidly approaching bankruptcy. Guth bought Pepsi's assets with his own personal funds and established a new Pepsi-Cola shortly afterwards.
Within two years, Guth's Pepsi had become a million-dollar profit machine. By 1936, Pepsi was selling 500 million bottles of cola a year and had become the second-largest soda company, behind only Coca-Cola, which had been indirectly responsible for Pepsi's resurgence in the first place. That's about when Guth ran into trouble with his other company over his Pepsi purchase.
The Loft candy company filed Guth v. Loft on behalf of its shareholders in 1935. The candy company's lawyers alleged that Guth breached his fiduciary duty to Loft when he failed to offer it the opportunity to purchase Pepsi's assets. The company also pointed to Guth's frequent use of Loft resources to help build Pepsi as a breach of fiduciary responsibility. The Delaware Supreme Court ultimately ruled in Loft's favor in 1939, creating the "Guth rule" in American corporate law, which prevents corporate representatives from taking on personal business opportunities that would have been within the scope of their corporation's means and expertise.
Loft absorbed Pepsi's operations and spun off its non-soft-drink businesses in 1941, and this restructured Pepsi-Cola grew briskly during and after World War II. In 1965, Pepsi-Cola merged with Frito-Lay to become PepsiCo, which set the soft-drink maker on the path to finally eclipse Coca-Cola's sales in the 1980s.
Despite Pepsi's seizing of the sales lead over its soft-drink rival, the Dow Jones Industrial Average passed it over by choosing Coke as a component in 1987. Over the long run, that's held the Dow back -- in addition to boasting a higher share price (which would have a larger impact on the price-weighted Dow), Pepsi has also given shareholders a better return. In the two decades following Coke's addition to the Dow, Pepsi outperformed Coke by a wide margin, posting 1,450% total gains to Coke's total growth of 1,000%.
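Those two total-return figures can be converted into average annual growth rates with standard compound-growth arithmetic. A quick sketch, using only the numbers quoted in the paragraph above (the function name and rounding are my own):

```python
# Convert a total return over a holding period to a compound annual
# growth rate (CAGR): (1 + total_return) ** (1 / years) - 1.

def cagr(total_return_pct: float, years: float) -> float:
    """Annualized growth rate implied by a cumulative percentage gain."""
    return (1 + total_return_pct / 100) ** (1 / years) - 1

# From the paragraph: in the two decades after Coke joined the Dow in 1987,
# Pepsi's total gain was 1,450% vs. Coke's 1,000%.
print(f"Pepsi: {cagr(1450, 20):.1%} per year")  # roughly 14.7%
print(f"Coke:  {cagr(1000, 20):.1%} per year")  # roughly 12.7%
```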
The birth of rock n' roll
Where would modern music be without the electric guitar? We haven't had to wonder about that since Aug. 10, 1937, when G. D. Beauchamp obtained the first patent ever awarded for an electric guitar. It was called the Rickenbacker Frying Pan, and History (the channel) has the background:
Beauchamp, partner with Adolph Rickenbacher in the Electro String Instrument Corporation of Los Angeles, Calif., spent more than five years pursuing his patent on the Frying Pan. It was a process delayed by several areas of concern, including the electric guitar's reliance on an engineering innovation that dated to the 19th century. When a vibrating string is placed within a magnetic field, it is possible to "pick up" the sound waves created by that string's vibrations and convert those waves into electric current. Replace the word "string" with the word "membrane" in that sentence, however, and you also have a description of how a telephone works. For this reason, Beauchamp's patent application had to be revised multiple times to clarify which of his individual claims were truly novel and which were merely new applications of existing patents.
On August 10, 1937, the Patent Office approved the majority of Beachamp's claims -- primarily those relating to the unique design of the Frying Pan's "pickup," a heavy electromagnet that surrounded the base of the steel strings like a bracelet rather than sitting below them as on a modern electric guitar. Unfortunately for the Electro String Corporation, Beauchamp's specific invention had long since been obsolesced by the innovations of various competitors, rendering the patent awarded on this day in 1937 an item of greater historical importance than economic value.
Invest in good works as well as good gains
The Pax World Fund launched with $101,000 in assets on Aug. 10, 1971. It was the first investment fund with a socially responsible mission attached to its fiduciary responsibility. Its two founders, United Methodist Church employees Jack Corbett and Luther Tyson, had worked on the idea since 1967, when the two were asked to find war-free investment opportunities by an Ohio parishioner.
Since its establishment, the Pax World Investments firm has grown to control approximately $3 billion in assets under management. The company's flagship Pax World Fund, now known as the Pax World Balanced Fund , has recorded an average annual return of about 8.4% since its creation in 1971. Investing sustainably can produce some pretty substantial gains.
Wall Street has been getting rich on trading floor tips for decades -- and for decades, those tips have been "for industry insiders only." But not anymore. Our top technology analyst recently infiltrated one of the finance world's most exclusive gatherings and left with three incredible investment opportunities, straight from the CEOs. These are profit-building strategies Main Street isn't meant to hear about, so act now before someone shuts us up. Click if you want "industry insider" earnings -- now!
The article How Coke Helped Create Pepsi, and Other Historic Market Moments originally appeared on Fool.com. Fool contributor Alex Planes holds no financial position in any company mentioned here. Add him on Google+ or follow him on Twitter, @TMFBiggles, for more insight into markets, history, and technology. The Motley Fool recommends Coca-Cola and PepsiCo and owns shares of PepsiCo. Try any of our Foolish newsletter services free for 30 days. We Fools don't all hold the same opinions, but we all believe that considering a diverse range of insights makes us better investors. The Motley Fool has a disclosure policy.
Copyright © 1995 - 2013 The Motley Fool, LLC. All rights reserved. The Motley Fool has a disclosure policy. | <urn:uuid:64282930-aa62-48ed-ab4d-568fdb7d2d45> | {
"date": "2018-08-20T20:21:35",
"dump": "CC-MAIN-2018-34",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221217006.78/warc/CC-MAIN-20180820195652-20180820215652-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9729371666908264,
"score": 3.265625,
"token_count": 1521,
"url": "https://www.aol.com/article/finance/2013/08/10/the-little-known-story-of-how-coke-helped-create-p/20691844/"
} |
Issues in Program Design
Contributor: World Bank
Author: Decentralization Thematic Team
Contact: Jennie Litvack
Education and Decentralization
There is currently a global trend of decentralizing education systems. Most countries are experimenting with or contemplating some form of education decentralization. The process transfers decision-making powers from central Ministries of Education to intermediate governments, local governments, communities, and schools. The extent of the transfer varies, however, from administrative deconcentration to much broader transfer of financial control to the regional or local level. While there are solid theoretical justifications for decentralizing education systems, the process requires strong political commitment and leadership in order to succeed. The path, depth, and ultimately, the outcome of decentralization reforms depend on the motivations for reforms, the initial country and sector conditions, and the interaction of various important coalitions within the sector.
Why Decentralize Education
In a world where most governments have experienced the pitfalls of centralized education service provision, mainly: opaque decision-making, administrative and fiscal inefficiency, and poor quality and access to services, the theoretical advantages of decentralization have become extremely appealing. In general, the process of decentralization can substantially improve efficiency, transparency, accountability, and responsiveness of service provision compared with centralized systems. Decentralized education provision promises to be more efficient, better reflect local priorities, encourage participation, and, eventually, improve coverage and quality. In particular, governments with severe fiscal constraints are enticed by the potential of decentralization to increase efficiency. Beneficiary cost recovery schemes such as community financing have emerged as means for central governments to off-load some of the fiscal burden of education service provision.
Deciding Who Controls What
There is ongoing debate about the appropriate locus of decision making within the education sector. The debate remains unresolved because the process requires that policy makers rationalize and harmonize a complex set of complementary functions, mainly: curriculum design, teaching methods, student evaluation, textbook production and distribution, teacher recruitment and pay, school construction and rehabilitation, education financing, and parent-teacher linkages. The choices of who does what are further complicated because each of these functions has to be evaluated for primary, secondary, and tertiary education, and often for preschools and adult literacy as well. Some emerging areas of consensus are summarized in this table.
The evidence about the impact of decentralization on education services is mixed and limited. In Brazil, it has increased overall access (enrollments) but has done little to reverse persistent regional inequities in access to schooling, per capita expenditures, and quality. Chile's experience also suggests that decentralization does not by itself remove inequalities between localities of varying incomes, and quality in poorer communities continues to lag. These results are supported by experiences in Zimbabwe and New Zealand. However, the design of these decentralized systems have been criticized. One shortcoming is that central governments have off-loaded responsibilities to local governments and communities without providing adequate targeted support to poorer areas.
Decentralization of education systems demands harmonization of a complex set of functions, each for primary, secondary, tertiary, and non-formal education. Issues of how far to devolve decision-making in each of these subsectors, and to whom, continue to be debated. there are a number of on-going experiments worldwide, ranging from devolution of limited functions to intermediate governments and local governments, to community-based management and financing of schools. The current consensus is that tertiary education, and specific functions such as curriculum design and standards setting are best retained by the center; secondary and primary education should be devolved as far as possible; local participation in school management improves accountability and responsiveness, and fosters resource mobilization. Yet, the devil is in the details, and there are many details that need to be sorted out on a country by country basis. | <urn:uuid:56a8de7a-7db3-447d-a816-300af74ca285> | {
"date": "2014-11-23T10:43:55",
"dump": "CC-MAIN-2014-49",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400379462.60/warc/CC-MAIN-20141119123259-00044-ip-10-235-23-156.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9416564106941223,
"score": 3.203125,
"token_count": 788,
"url": "http://www.ciesin.columbia.edu/decentralization/English/Issues/Education.html"
} |
New brain imaging research from the Monash Institute of Cognitive and Clinical Neurosciences (MICCN) suggests that some people experience mental distress when faced with the prospect of disagreeing with others. The findings, published in the journal Frontiers in Human Neuroscience, reveal that some individuals choose to agree most of the time with others to spare themselves feelings of discomfort.
The study gives new insights into how the brain handles disagreement, with implications for understanding social conformity.
Using functional magnetic resonance imaging, the Melbourne-based research team, led by senior author Dr Pascal Molenberghs and first authors Dr Juan Dominguez and Sreyneth Taing, investigated which brain areas are involved when people disagree with others. They found that people who rarely disagreed showed strong activation in the medial prefrontal cortex and anterior insula when they did disagree. These areas have previously been implicated in cognitive dissonance, a heightened state of mental stress.
According to Dr Domínguez, their findings provide insight into why some people find it hard to disagree with others. “People like to agree with others, a social default known as the truth bias, which is helpful in forming and maintaining social relationships. People don’t like to say that others are not telling the truth or lying because this creates an uncomfortable situation,” he added.
So, if you like to avoid mental distress when arguing with your partner, it is better to agree with them.
However, the research team also argues that a reduced inclination for individuals to disagree with others may have adverse effects as people may feel compelled to conform, potentially against their own interests.
The authors suggest that an aversion to disagreeing has real-life implications, including poor decision-making, anxiety, and interpersonal relationship problems. A better understanding of the brain mechanisms of disagreement is therefore of great relevance in devising ways to help people assert their independence.
Source: Monash University
Image Source: The image is adapted from the Monash University press release.
Original Research: Full open access research for “Why Do Some Find it Hard to Disagree? An fMRI Study” by Juan F. Domínguez D, Sreyneth A. Taing and Pascal Molenberghs in Frontiers in Human Neuroscience. Published online January 29 2016 doi:10.3389/fnhum.2015.00718
Why Do Some Find it Hard to Disagree? An fMRI Study
People often find it hard to disagree with others, but how this disposition varies across individuals or how it is influenced by social factors like other people’s level of expertise remains little understood. Using functional magnetic resonance imaging (fMRI), we found that activity across a network of brain areas [comprising posterior medial frontal cortex (pMFC), anterior insula (AI), inferior frontal gyrus (IFG), lateral orbitofrontal cortex, and angular gyrus] was modulated by individual differences in the frequency with which participants actively disagreed with statements made by others. Specifically, participants who disagreed less frequently exhibited greater brain activation in these areas when they actually disagreed. Given the role of this network in cognitive dissonance, our results suggest that some participants had more trouble disagreeing due to a heightened cognitive dissonance response. Contrary to expectation, the level of expertise (high or low) had no effect on behavior or brain activity.
"date": "2017-11-20T11:44:08",
"dump": "CC-MAIN-2017-47",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806030.27/warc/CC-MAIN-20171120111550-20171120131550-00056.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9299260377883911,
"score": 3.578125,
"token_count": 776,
"url": "http://neurosciencenews.com/social-conformity-anxiety-psychology-3605/"
} |
Paul Hyndman captured this stunning view of Venus crossing the face of the sun in hydrogen-alpha light on the morning of June 8, 2004 from Roxbury, Connecticut. He used an Astro-Physics 105-millimeter Traveler telescope fitted with a Coronado Solarmax90/T-Max and 30-mm blocking filter, a TeleVue 2X Powermate lens, and an SBIG STL-11000M CCD camera.
Credit: Paul Hyndman
Today's historic Venus transit is a marathon event lasting nearly seven hours, but skywatchers who don't have that kind of time can break it down into a handful of key milestones.
Venus treks across the sun's face from Earth's perspective today (June 5; June 6 in much of the Eastern Hemisphere), marking the last such Venus transit until 2117. Few people alive today will be around to see the next transit, which makes the rare celestial sight a premier event in the astronomical and skywatching communities.
The Venus-sun show will begin around 6 p.m. EDT (2200 GMT) and end at roughly 12:50 a.m. EDT (0450 GMT) Wednesday, with the exact timing varying by a few minutes from point to point around the globe.
Before you even attempt to observe the transit of Venus, a warning: NEVER stare at the sun through binoculars or small telescopes or with the unaided eye without the proper safety equipment. Doing so can result in serious and permanent eye damage, including blindness.
Astronomers use special solar filters on telescopes to view the sun safely, while No. 14 welder's glass and eclipse glasses can be used to observe the sun directly. [How to Safely Photograph the Venus Transit]
With that warning stated, here's a look at the first major stage of the transit of Venus.
The transit officially commences when the leading edge of Venus first touches the solar disk, an event astronomers call "Contact I" or "ingress exterior." This milestone occurs at 6:03 p.m. EDT (2203 GMT) for observers in eastern North America, while skywatchers on the other side of the continent will see it a few minutes later, at 3:06 p.m. PDT.
Second contact and beyond
Next up is "Contact II," or "ingress interior" — the moment when Venus moves fully onto the sun's face. This will happen 18 minutes after Contact I. [Venus Transit of 2004: 51 Amazing Photos]
If you're viewing the transit through a good telescope, you may see a dark teardrop form, briefly joining Venus' trailing edge and the solar disk just before Contact II. This so-called "black-drop effect" bedeviled efforts in 1761 and 1769 to measure the Earth-sun distance by precisely timing Venus transits from many spots around the globe.
Scientists once thought the black-drop effect was caused primarily by Venus' thick atmosphere, or by viewing through Earth's ample air. But astronomers also observed it in images of a Mercury transit snapped by a NASA spacecraft in 1999. Mercury has an extremely tenuous atmosphere, so the prevailing wisdom had to go.
"Our analysis showed that two effects could fully explain the black drop as seen from space: the inherent blurriness of the image caused by the finite size of the telescope, and an extreme dimming of the sun’s surface just inside its apparent outer edge," Jay Pasachoff of Williams College, who helped analyze the 1999 Mercury transit pictures, wrote last month in the journal Nature.
After Contact II, Venus continues its long, slow and slanting trek across the sun's face. The next major milestone comes at roughly 6:25 p.m. PDT (0125 GMT on Wednesday; switching to Pacific time now, as the sun will have set in eastern North America), when Venus reaches the exact center of its transit path — a point known as "Greatest Transit."
Earth's so-called sister planet will keep traveling across the solar disk for another three hours or so. The beginning of the end for the transit comes at about 9:30 p.m. PDT (0430 GMT Wednesday) with "Contact III," when Venus' leading edge touches the boundary of the solar disk.
Contact III, also known as "egress interior," represents the last moment when Venus is still entirely contained on the sun's face, and it offers another chance to witness the black-drop effect. The last-in-a-lifetime show ends 18 minutes later with "Contact IV," or "egress exterior," when Venus finally moves off the solar disk.
Where and how to watch
As the times of these various events indicate, much of the world won't be able to observe the whole transit.
In most of North America, for example, the sun will set before the celestial festivities end, while much of Europe will witness only the last stages of the transit as the sun is coming up. Large portions of South America and Africa will miss out entirely.
However, some regions of the globe will be treated to the entire spectacle. These include eastern Asia, eastern Australia, New Zealand and the western Pacific, as well as Alaska, northern Canada and almost all of Greenland.
To safely observe the Venus transit, you can buy special solar filters to fit over your equipment, or No. 14 welder's glass to wear over your eyes, as outlined above.
The safest and simplest technique, however, is probably to observe the transit indirectly using the solar projection method. Use your telescope or one side of your binoculars to project a magnified image of the sun’s disk onto a shaded white piece of cardboard.
The projected image on the cardboard will be safe to look at and photograph. But be sure to cover the telescope’s finder scope or the unused half of the binoculars, and don’t let anyone look through them.
And if weather or sunrise/sunset times conspire against you, you can always watch the Venus transit online. Multiple organizations will be broadcasting live footage of the event from a variety of locations around the world, some of which are bound to have clear skies.
Venus transits occur in pairs eight years apart, but these dual events happen on average less than once per century. The most recent transit occurred in 2004; before that, the last ones took place in 1874 and 1882.
Editor's note: If you take any great photos of the Venus transit and would like them to be considered for use in a story or image gallery, send them to SPACE.com Managing Editor Tariq Malik at [email protected].
Glimepiride should be used with caution in:
- People with decreased kidney function
- People with decreased liver function
- People in stressful situations such as injury, operation or infection with fever
- People lacking an enzyme in the blood known as glucose-6-phosphate dehydrogenase (G6PD)
- People with thyroid problems
- People with adrenal gland problems
- People with pituitary gland problems
- People with a poor diet or suffering from malnutrition
- People drinking alcohol, particularly if a meal is skipped
- People taking certain medications to treat high blood pressure, infections and other conditions (see interactions)
This medication may lead to low blood sugar levels when meals are taken at irregular hours or if you take more exercise than normal. The symptoms of low blood sugar include headache, excessive hunger, sweating, nausea, vomiting, tiredness, sleepiness, feeling restless, feeling aggressive, difficulty in concentrating, tremor, confusion, speech and sight problems. It is important to take some form of sugar as soon as possible to prevent your blood sugar level dropping any further.
It should not be used in:
- Children (under 18 years)
- People with an allergy to glimepiride or any of the ingredients in the medicine
- People with an allergy to other sulfonylureas (such as glibenclamide, glipizide)
- People with an allergy to sulfonamides (to treat infections such as sulfamethoxazole)
- People with insulin dependent diabetes (type 1 diabetes)
- People with severe liver problems
- People with severe kidney problems
- People in a diabetic coma
- People with porphyria (inherited blood disorder)
- People with diabetic ketoacidosis (complication of diabetes with body producing ketone bodies)
- Pregnant women
- Women breastfeeding
- Rare hereditary problems of galactose intolerance, the Lapp lactase deficiency or glucose-galactose malabsorption (as the tablets contain lactose).
Also see list of precautions and interactions
Do not store above 30°C.
What is it used for?
- Glimepiride is used to treat type 2 diabetes mellitus.
- It is a sulfonylurea, sometimes known as an oral hypoglycaemic.
- It is used to control blood sugar levels in people with type 2 diabetes. People with type 2 diabetes do not normally require insulin (non-insulin dependent) and glimepiride is used when diet, exercise and weight loss have not sufficiently controlled the blood sugar levels. Glimepiride mainly works by stimulating the release of insulin from the pancreas. The insulin then lowers the blood sugar levels.
- In general this drug is used to treat type 2 diabetes in adults by controlling the blood sugar levels.
- Benefits of being on this drug can include better control of blood sugar levels thus reducing the risk of complications.
Listed below are the typical uses of glimepiride.
- Type 2 (non-insulin dependent) diabetes, when diet, exercise and weight loss alone are not sufficient to control blood sugar levels.
- On occasion your doctor may prescribe this medicine to treat a condition not on the above list. Such conditions are listed below.
- None known.
HOW TO USE/TAKE
How often do I take it?
- Take this medication by mouth usually once daily, before or with breakfast or the first main meal of the day. Swallow the tablets whole with some water or other suitable liquid. It is important not to leave out any meal when you are on glimepiride.
- Use this medication regularly in order to get the most benefit from it.
- Remember to use it at the same time each day - unless specifically told otherwise by your doctor.
- It may take up to a few hours before the full benefit of this drug takes effect.
- Certain medical conditions may require different dosage instructions as directed by your doctor.
- Dosage is based on your age, gender, medical condition, response to therapy, and use of certain interacting medicines. The dose of glimepiride taken is determined by the blood and urinary glucose levels.
Do I need to avoid anything?
- If you have low or high blood sugar levels, or you have visual problems, then avoid driving or operating machinery. It is important that you take extra care to avoid low blood sugar levels when driving and that you pay attention to the warning signs of low blood sugar. Consult your doctor or pharmacist for more details.
When can I stop?
- It is important to continue taking this medication even if you feel well, unless your doctor tells you to stop.
GLIMEPIRIDE SIDE EFFECTS
- Temporary visual disturbances (usually at start of treatment)
- Blood abnormalities
- Decreases in white blood cell count (more likely to get an infection)
- Decreases in platelet count (more likely to bruise and bleed)
- Decreases in red blood cell count (look pale and feel tired)
- Low blood sugar (hypoglycaemia)
- Increases in liver enzymes
- Stomach ache
- Allergic skin reactions such as rash, itching, hives or increased sensitivity to sunlight
- Abnormal liver function
- Inflammation of the liver (hepatitis)
- Yellowing of the skin and eyes (jaundice)
- Bile flow problems (cholestasis)
- Liver failure
- Decreases in blood sodium level
- Very low blood sugar levels (severe hypoglycaemia) with fits, loss of consciousness or coma
If any of these persist or you consider them severe then inform doctor or pharmacist.
Glimepiride can lead to low blood sugar (hypoglycaemia). The symptoms of low blood sugar include headache, excessive hunger, nausea, vomiting, tiredness, sleepiness, feeling restless, feeling aggressive, difficulty in concentrating, tremor, confusion, speech and sight problems.
Tell your doctor immediately if you develop any of the following symptoms:
- Yellowing of eyes and skin (signs of liver problems)
- Skin rash, itching hives or increased sensitivity to sunlight as mild allergic reactions can progress into more serious allergic reactions
- Severely low blood sugar with fits, loss of consciousness or coma
Remember that your doctor has prescribed this medication because he or she has judged that the benefit to you is greater than the risk of side effects. Many people using this medication do not have serious side effects.
A serious allergic reaction to this drug is unlikely, but seek immediate medical attention if it occurs. Symptoms of a serious allergic reaction include: rash, itching/swelling (especially of the face/tongue/throat), dizziness, trouble breathing.
This is not a complete list of possible side effects. If you notice other effects not listed above, contact your doctor or pharmacist.
The Yellow Card Scheme allows you to report suspected side effects from any type of medicine (which includes vaccines, herbals and over-the-counter medicines) that you are taking. It is run by the medicines safety watchdog called the Medicines and Healthcare products Regulatory Agency (MHRA). Please report any suspected side effect on the Yellow Card Scheme website.
Before taking glimepiride, tell your doctor or pharmacist if you are allergic to it; or to other sulfonylureas; or if you have any other allergies.
This medication should not be used if you have certain medical conditions. Before using this medicine, consult your doctor or pharmacist if you have:
- Allergy to glimepiride, other sulfonylureas (such as glibenclamide, glipizide), sulfonamides (to treat infections such as sulfamethoxazole) or any of the ingredients in the medication
- Severe liver problems
- Severe kidney problems
- Diabetic coma
- Porphyria (inherited blood disorder)
- Diabetic ketoacidosis (complication of diabetes with body producing ketone bodies)
- Rare hereditary problems of galactose intolerance, the Lapp lactase deficiency or glucose-galactose malabsorption (as the tablets contain lactose)
Before using this medication, tell your doctor or pharmacist your medical history, especially any of the following:
- Decreased kidney function
- Decreased liver function
- Hormonal problems such as thyroid, adrenal gland or pituitary gland problems
- Ongoing stressful situations such as injury, operation or infection with fever
- Glucose-6-phosphate dehydrogenase (G6PD) deficiency (an enzyme in the blood)
- Poor diet or malnutrition
Before having surgery, tell your doctor or dentist that you are taking this medication.
Does alcohol intake affect this drug?
- Alcohol can increase or decrease the blood sugar lowering effect of glimepiride in an unpredictable way.
The elderly: glimepiride should be used with caution in the elderly as it may increase the risk of hypoglycaemia (low blood sugar levels).
Pregnancy and breastfeeding - please ensure you read the detailed information below
Glimepiride is not safe to take if you are, or are planning to become, pregnant. During pregnancy, it is preferred to control diabetes through the use of insulin.
It is sensible to limit use of medication during pregnancy whenever possible. However, your doctor may decide that the benefits outweigh the risks in individual circumstances and after a careful assessment of your specific health situation.
If you have any doubts or concerns you are advised to discuss the medicine with your doctor or pharmacist.
It is not known whether glimepiride passes into breast milk. The manufacturer therefore states that it should not be taken if you are breastfeeding.
It is sensible to limit use of medication during breastfeeding whenever possible. However, your doctor may decide that the benefits outweigh the risks in individual circumstances and after a careful assessment of your specific health situation.
If you have any doubts or concerns you are advised to discuss the medicine with your doctor or pharmacist.
Your doctor or pharmacist may already be aware of any possible drug interactions and may be monitoring you for them. Do not start, stop, or change the dosage of any medicine before checking with them first.
This drug should not be used with the following medications because very serious, possibly fatal interactions may occur: None known.
Before using this medication, tell your doctor or pharmacist of all prescription and non-prescription/herbal products you may use, especially of:
Medications taken with glimepiride which increase the risk of low blood sugar (hypoglycaemia). These include the following:
- Other medications to treat diabetes such as insulin or metformin
- Anabolic steroids and male sex hormones such as testosterone
- Coumarins to stop blood clotting such as warfarin
- Fibrates used to treat high cholesterol such as clofibrate
- Medications to treat high blood pressure such as
  - ACE inhibitors e.g. captopril
  - Alpha blockers, also used to treat enlarged prostate gland e.g. clonidine, doxazosin and prazosin
- Medications to treat bacterial infections such as clarithromycin, chloramphenicol, sulfonamides (e.g. sulfamethoxazole or antibiotics containing sulphonamides e.g. co-trimoxazole), tetracyclines and quinolone antibiotics
- Medications to treat fungal infections such as fluconazole, miconazole and voriconazole
- Medications used to treat gout such as allopurinol, probenecid and sulfinpyrazone
- Medications used to treat cancer such as cyclophosphamide
- Medications used to reduce weight such as fenfluramine
- Medicines used to treat depression such as fluoxetine, monoamine oxidase inhibitors (e.g. moclobemide and phenelzine)
- Medicines called anti-arrhythmic agents used to control abnormal heart beat (e.g. disopyramide)
- Medicines used to treat nasal allergies such as hayfever (e.g. tritoqualine)
- Non-steroidal anti-inflammatory drugs (NSAIDs) and salicylates to treat pain and/or inflammation such as aspirin, ibuprofen and naproxen
Medications taken with glimepiride which increase the risk of high blood sugar (hyperglycaemia). These include the following:
- Adrenaline, noradrenaline and similar medications
- Corticosteroids used to treat allergies and inflammation such as hydrocortisone, prednisolone
- Medications known as phenothiazines
  - used to treat psychoses (severe mental health problems) such as chlorpromazine
  - used to treat allergies, nausea and vomiting such as some antihistamines (e.g. cetirizine, promethazine)
- Medications used to treat epilepsy such as phenobarbital and phenytoin
- Medications used to treat increased eye pressure (glaucoma) such as acetazolamide
- Medications used to treat high blood pressure such as diazoxide and diuretics (water tablets), particularly thiazide diuretics e.g. bendroflumethiazide
- Medications used to treat very low blood sugar levels such as glucagon
- Nicotinic acid (used to lower cholesterol)
- Oestrogens and progestogens (e.g. oral contraceptives, hormone replacement therapy)
- Rifampicin, isoniazid (antibiotics used to treat tuberculosis)
- Thyroid hormones
Other medications which may interact with glimepiride include:
- Cimetidine and other H2 antagonists (for ulcers and indigestion)
- Beta-blockers e.g. propranolol (including eye drops) and reserpine used to treat high blood pressure
The warning signs of low blood sugar can be masked by beta-blockers and clonidine.
This information does not contain all possible interactions. Therefore, before using glimepiride, tell your doctor or pharmacist of all the products you use.
If you happen to have taken too much of this medication there is a danger of low blood sugar levels, so you should consume foods or drinks containing sugar (e.g. sugar cubes, sweet juice, sweetened tea) straight away and inform a doctor immediately.
When treating low blood sugar due to accidental intake in children, the quantity of sugar given must be carefully controlled to avoid the possibility of producing dangerously high blood sugar levels. Anyone in a state of unconsciousness must not be given food or drink.
Since a low blood sugar state may last for some time, it is very important that the patient is carefully monitored until there is no more danger. Admission to hospital may be necessary, as a precaution.
Show the doctor the package or remaining tablets, so the doctor knows what has been taken.
An overdose of glimepiride may cause headache, excessive hunger, sweating, nausea, vomiting, tiredness, sleepiness, feeling restless, feeling aggressive, difficulty in concentrating, tremor, confusion, speech and sight problems.
Severe cases of low blood sugar accompanied by loss of consciousness and coma are cases of medical emergency requiring immediate medical treatment and admission into hospital. It may be helpful to tell your family and friends to call a doctor immediately if this happens to you.
If you think you, or someone you care for, might have accidentally taken more than the recommended dose of glimepiride or intentional overdose is suspected, contact your local hospital, GP or if in England call 111. In Scotland call NHS 24. In Wales, call NHS Direct Wales. In the case of medical emergencies, always dial 999.
If you forget to take a dose, do not take a double dose to make up for the forgotten doses.
Tuesday, March 22
Keynote Speaker: Marc Prensky, Global Future Education Foundation and Institute, USA
“PLAN B”: Education to Improve the World
ABSTRACT: Today, what the world offers our kids as education is pretty much the same everywhere — albeit with a wide range of quality and success. I call it “Plan A” education — it’s what all of us experienced. It is based on an “academic model” with the goal of “learning now, so you can accomplish later.” The curriculum is a narrow range of subjects — math, English/local language, science and social studies (“The MESS”).
Today, everything that goes on under the name of “education reform” is about doing “Plan A” better — by including more underserved kids, by adding STEM and the arts, by adding more and more technology, by adding new types of schools (e.g. charters), and by adding so-called “21st century skills.”
Unfortunately, an improved “Plan A” is not what today’s and tomorrow’s empowered kids need — Plan A no longer fits the world in which our kids live. Tomorrow’s kids need — and want — a new “Plan B” education — one that further empowers them to make the world a better place by continuously accomplishing world-improving projects while they are still students, through a process of “accomplish now, learn as you do so.” Plan B education has NEW ENDS (improving the world and becoming good, effective world-improving people), NEW MEANS (real-world projects all through school), and NEW SUPPORT (a new, broader curriculum of Effective Thinking, Effective Action, Effective Relationships and Effective Accomplishment). We are just beginning to see elements of “Plan B” emerging, in pockets, around the globe. This talk is about why “Plan B” is a far better education for tomorrow’s kids, what it looks like, and how to get there.
Q&A FOLLOWING KEYNOTE WITH STUDENT PANEL: http://www.edutopia.org/ikid-digital-learner-technology-2008
BIOGRAPHY: Marc Prensky — coiner of the term “Digital Native” 15 years ago — is an internationally acclaimed speaker and author in the field of education. He is currently the founder and Executive Director of the Global Future Education Foundation and Institute, a not-for-profit organization dedicated to promoting “Plan B” — a new educational paradigm of kids’ Improving the World and Becoming good, effective and world-improving people, through the means of Real-world Accomplishment supported by a curriculum based on Effective Thinking, Effective Action, Effective Relationships and Effective Accomplishment. (See global-future-education.org)
Wednesday, March 23
Keynote Speaker: Larysa Nadolny, Iowa State Univ., USA
EPIC WIN: Designing for success with game-based learning
ABSTRACT: Throughout history, games have engaged players of all ages in a shared experience of persistence, challenge, failure, and success. This has been achieved through a wide variety of gaming strategies and structures, from role play to puzzle and digital to paper. The recent popularity of designing curriculum with games in mind has shown that some game structures translate well to academic environments (EPIC WIN) while some are an EPIC FAIL. Larysa Nadolny will share her experience with designing and teaching in game-based learning environments and practical steps to get started with your own course. @GBLedu
BIOGRAPHY: Larysa Nadolny is an assistant professor in the School of Education at Iowa State University. Her research includes immersive technologies and design methodologies for education, particularly game-based learning, virtual reality, and augmented reality applications. She was recently awarded a Blackboard Catalyst Award and Director’s Choice for Courses with Distinction Award for the game-based learning design of a large undergraduate educational technology course. Dr. Nadolny’s teaching experience began as a middle school science teacher in Texas and has continued in Delaware, Pennsylvania, and Iowa in guiding current and future teachers with innovative technologies in the classroom. In 2014, Dr. Nadolny received an Early Achievement in Teaching Award at Iowa State University. You can read about her recent research and project with game-based learning at www.drnadolny.com.
Thursday, March 24
Keynote Speaker: Bob Hirshon, American Assoc. of the Advancement of Science, USA
Using Technology and Universal Design Principles to Reach Diverse Audiences
ABSTRACT: Technological innovations intended to make learning activities more accessible for children with disabilities can also make them better able to meet the needs of other children, with and without disabilities. In a recent study funded by the National Science Foundation, Hirshon et al examined a suite of learning activities, including computer games, hands-on demonstrations, art and creative writing challenges, and found that UDL-inspired changes improved effectiveness regardless of whether children had the disability being addressed. This presentation will share findings and ideas from the research, followed by a lively discussion of implications for education professionals and suggestions for future study.
BIOGRAPHY: Bob Hirshon is Program Director for Technology and Learning at the American Association for the Advancement of Science (AAAS) and host of the daily radio show and podcast Science Update. He is Principal Investigator on the NSF-funded project KC Empower, which examines how informal science activities can be made more accessible to children with disabilities. He oversees the Science NetLinks project for K-12 science teachers, providing free lessons, apps and other resources. Science NetLinks hosts over 400,000 user sessions per month. Hirshon’s Qualcomm Wireless Reach project, Active Explorer, allows educators to create mobile phone and tablet explorations for children, called Quests. Media that children collect on Quests download to their Active Explorer webpage, where they use creative tools to make SmartWork projects to share what they’ve learned. Hirshon also heads up Kinetic City, including the Peabody Award-winning children’s radio drama, McGraw-Hill book series and Codie Award- winning website and education program. He is a member of the Education and Public Outreach team for NASA’s MESSENGER project to planet Mercury, for which he developed planetary exploration tools that play within Google Earth. He curates and hosts the annual AAAS Science Film Showcase event, featuring the year’s best science films and videos and the producers who created them. He can be heard on XM/Sirius Radio’s Kids Place Live as “Bob the Science Slob,” where he discusses science and answers call in questions from kids. Hirshon is a Computerworld/ Smithsonian Hero for a New Millennium laureate.
Friday, March 25
Keynote Speaker: Yuhyun Park, Nanyang Technological University, Singapore
An Innovative Digital Citizenship Initiative in Singapore and Korea
ABSTRACT: The meteoric and unabated growth of technology’s impact on society has led to a fast-changing, volatile, and uncertain 21st century global landscape. It is predicted that, within 10 years, many current jobs will disappear and new types of jobs will emerge, especially in increasingly digital-oriented economies. Our children are at the center of this shift, and are starting to use digital technologies at increasingly younger ages. While digital technologies play a widespread role in children’s lives, there are serious risks to early usage of digital technologies. Many studies show that children’s exposure to digital technologies and media can also bring worrisome harmful effects including cyberbullying, technology addiction, inappropriate contents, and privacy concerns among others.
Korea and Singapore are two of the world’s most technologically advanced countries and have witnessed first-hand the harmful effects of digital technologies on children. The seriousness of these issues has caught the attention of society at large, and there are growing calls to prepare our children for success and safety in the digital world. In particular, the importance of teaching children about digital citizenship when they start using digital technologies and media is becoming a key priority in order to minimize risks and maximize potential.
In this talk, I will introduce the iZ HERO project, which is an innovative digital citizenship initiative for primary school children in Singapore and Korea. Research findings indicate that its interactive, transmedia approach effectively teaches young children about digital citizenship with improved outcomes. From a strategic perspective, the iZ HERO project in Singapore and Korea also provides a successful example of how multi-stakeholders including schools, ICT companies, NGOs, governments and universities can work together to enhance digital citizenship among young children through holistic online and offline tools including a school engagement programme, interactive exhibition, games and an online platform.
BIOGRAPHY: Dr. Yuhyun Park is a social entrepreneur and university researcher who founded infollutionZERO, a non-profit organization in Korea that is focused on raising public awareness of infollution (information pollution) such as cyberbullying and technology addiction, providing digital citizenship training for children, and shaping public policy on internet governance and safety.
In 2013, she was selected as an Eisenhower Fellow (Multi-National Program) as well as the first Ashoka Fellow representing Korea in recognition of her leadership in the social entrepreneurship sector. She has twice won international awards from UNESCO, including the UNESCO King Hamad Bin Isa Al-Khalifa Prize for Use of ICT in Education in 2012 and the Wenhui UNESCO Award for Educational Innovation in 2013, for her development of the iZ HERO program. iZ HERO is an innovative research-based educational program that teaches digital citizenship to children by using interactive digital media and comprehensive school engagement programs. She was also selected as a World Economic Forum Young Global Leader (2015) and was named a member of the Steering Committee of the World Economic Forum’s project Shaping the Future Implications of Digital Media for Society.
Tuesday, March 22
Invited Speaker: Michael Spector, Univ. of North Texas, USA
Smart Learning Environments: Concepts and Issues
ABSTRACT: There are two new journals in our field that involve the emerging notion of smart educational technologies. Earlier this year, the Smart Learning Institute at Beijing Normal University sponsored the Smart Education Conference. Related efforts in recent years involving adaptive technologies and personalized learning are also noteworthy. Given such interest in this area, it seems reasonable to consider what constitutes a smart learning environment or a smart educational technology. It is then interesting to see what is being done, what issues are emerging, and what successes in this area are likely to occur in the next few years. Rather than engage in exaggerated claims and predict dramatic transformation of learning and instruction, the emphasis will be on the potential, as yet largely unrealized, and the challenges confronting significant and sustained progress.
BIOGRAPHY: Michael Spector is Professor and Former Chair of Learning Technologies at the University of North Texas. He was previously Professor of Educational Psychology and Instructional Technology at the University of Georgia. Prior to that, he was Associate Director of the Learning Systems Institute and Professor of Instructional Systems at Florida State University. He served as Chair of Instructional Design, Development and Evaluation at Syracuse University and was Director of the Educational Information Science and Technology Research Program at the University of Bergen in Norway. He earned a Ph.D. in Philosophy from The University of Texas at Austin. His research focuses on intelligent support for instructional design, assessing learning in complex domains, and technology integration in education. Dr. Spector served on the International Board of Standards for Training, Performance and Instruction (ibstpi) as Executive Vice President; he is a Past President of the Association for Educational and Communications Technology as well as a Past Chair of the Technology, Instruction, Cognition and Learning Special Interest Group of AERA. He is editor of Educational Technology Research & Development and serves on numerous other editorial boards. He edited the third and fourth editions of the Handbook of Research on Educational Communications and Technology, as well as The SAGE Encyclopedia of Educational Technology.
Ansel Adams in Yosemite Valley “Celebrating the Park at 150”
A deluxe, oversized book timed for the 150th anniversary of Abraham Lincoln’s signing of the Yosemite Grant, an event that laid the groundwork for the National Parks system.
Ansel Adams first visited Yosemite in 1916, at the age of fourteen, and returned every year throughout his life. It was in Yosemite that he fell in love with Western wilderness and became a photographer; he made more photographs at Yosemite than at any other place.
Roughly 150 breathtaking images are exquisitely reproduced in this large-format clothbound book. There are notable portraits of El Captain (the famous rock face whose Dawn Wall was recently free-climbed for the first time), Half Dome, Cathedral Rocks, Royal Arches, and other distinctive rock formations that frame the valley; grand views in all seasons and all states of weather; intimate details of nature from the Valley floor; the waterfalls–Bridaveil, Yosemite, Vernal, Nevada; studies of trees, from the giants of the Mariposa Grove to the exquisite white blossoms of the dogwood. There are gathering and clearing storms, snow and ice, bright sunshine, and the subtle shades of dawn and dusk.
The photographs have been selected and sequenced by Peter Galassi, former Chief Curator of Photography at The Museum of Modern Art, New York. His abundantly illustrated introduction sets Adams’s pictures within the rich history of imagery of Yosemite.
Is the Death Penalty a Necessary Action?
The death penalty in the
In order to reach a compromise on the imposition of capital punishment, it is necessary to weigh all of the pertinent arguments on both sides of the issue. Death penalty proponents present compelling reasons as to why capital punishment is appropriate – the punishment should be commensurate with the crime; it should serve as a deterrent to others against future crimes; it is based on fundamental religious principles; and it is economically beneficial to the government and the taxpayers (Robinson). The punishment should suit the crime in order that our society and our system of justice may be maintained. Sentencing an individual to life in prison does not adequately redress the seriousness or the enormity of the crime of murder or acknowledge the value our society places on the protection of human life.
Opponents of this position argue that it is inherently wrong for the government to engage in murder itself in order to punish individuals who have committed murder. Opponents believe that capital punishment is in fact immoral and inhumane and is counter to the very foundations of our society. They strongly believe that the imposition of life in prison for capital crimes is the appropriate and ultimate punishment that should be imposed. They also show evidence of the cost of capital punishment being more expensive than the cost of sentencing one to life in prison.
One of the primary arguments raised by opponents is that if a person convicted of a crime and sentenced to death is later determined to have been innocent, the execution cannot be undone. Many people believe that sentencing a criminal to life in prison without parole is simply more moral. The inhumanity of capital punishment and its effect on society are cited as reasons why the death penalty should not be imposed.
The opponents of capital punishment even believe that the actual methods used to carry out these sentences are in and of themselves inhumane. The argument that lethal injection is an inhumane way to execute a criminal might lose its force if the drugs were researched further. Currently, the drug combination does not always work properly. Instead of numbing the condemned and killing quickly, the drugs often simply paralyze the condemned so that he or she cannot speak or show pain, and death often comes much more slowly than it should. Electrocution is a painful death as well; sometimes the condemned does not die the first time and has to be shocked again (deathpenaltyinfo). These instances show how inhumane the death penalty can be. If the technology were researched further and made more effective, the death penalty could be considered more humane. Often it takes time for the lethal injection to take effect because it is poorly administered by prison staff. Recently, however, doctors have been administering the lethal injection so that the process proceeds properly (Stillman). Many doctors support the death penalty and are not against administering a lethal injection to murderers. More involvement by trained doctors would allow for proper implementation of procedures.
Those favoring capital punishment also cite numerous instances where escaped convicts have murdered again and therefore believe that life without parole is not the answer. Prison escapes by convicted murderers have occurred over the years. Clyde Barrow and Bonnie Parker, a criminal couple in the 1930s, were thought to have committed a combined 13 murders throughout 5 escapes (FBI History). Granted that prisons are much more secure today than in the 1930s, it is still a fact that no prison can fully secure an inmate. In 2001, seven convicts escaped and murdered on Christmas Eve in Texas.
With proper prison security, it would eliminate the possibility that convicted criminals could escape and that, in fact, the incidence of prisoners escaping and continuing to commit capital crimes such as murder is relatively small and does not present a true threat to society. A criminal sentenced to life in prison without the chance of parole and housed in a maximum-security prison would not present any real threat to society.
One of the most critical debates over capital punishment centers around the issue as to whether or not it is a deterrent. The death penalty serves to deter criminals from killing because it presents them with the real possibility of the ultimate punishment (Robinson). An article in the New York Times on November 18, 2007, reports that, “according to roughly a dozen recent studies, executions save lives. For each inmate put to death, the studies say, 3 to 18 murders are prevented (Liptak).” The New York Times, a historically liberal newspaper, reporting this is surprising. Cass R. Sunstein, a law professor at the University of Chicago, said in the article, “the evidence on whether it has a significant deterrent effect seems sufficiently plausible that the moral issue becomes a difficult one…I did shift from being against the death penalty to thinking that if it has a significant deterrent effect it’s probably justified.” There are other arguments that also support execution as a deterrent. Michael Smerconish of the Huffington Post explains, “Roy Adler and Michael Summers, both professors at Pepperdine University, have recently analyzed the relationship between the number of U.S. executions by year and the number of murders in the year thereafter for 1979-2004…When executions leveled off, the professors found, murders increased. When executions increased, the number of people murdered dropped off. In a year-by-year analysis, Adler and Summers found that each execution was associated with 74 fewer murders the following year.” These two arguments for deterrence are backed up by considerable data. Opponents argue that often there is not a convincing amount of evidence or there is fabricated evidence to backup deterrent statistics.
These two articles are amongst a limited number that can be found supporting capital punishment as a deterrent. Opponents note that there are numerous articles that show studies of capital punishment not being proven to be a deterrent. These are often more convincing. These studies also usually show that life in prison proves to be an equal deterrent. Capital Punishment refers to Roger Hood’s statement that econometric analysis has not shown enough evidence to prove that capital punishment provides more deterrence than alternative penalties, such as life in prison (Hodgkinson). The debate as to whether it is a deterrent or not has not truly been proven by either side. There are convincing statistics and studies that could induce someone of either persuasion to believe in or to be against capital punishment as a deterrent.
For the most part, critics find arguments on both sides of the deterrence issue lacking real data. William J Bowers and Glenn L Pierce have written a critique of Professor Isaac Ehrlich’s research on capital punishment, concluding that he failed to produce any reliable evidence that the death penalty deters murderers (Bowers). They went on to say, “his data are inadequate for the purposes of his analysis and he misapplies the highly sophisticated statistical techniques he employs.” However, criticism goes both ways. In the New York Times article, scholars and even the author criticized the studies done for not producing enough conclusive evidence. Clearly, this demonstrates that deterrence can be argued either way and is not a convincing factor in the debate for or against capital punishment.
Proponents of capital punishment also want their opponents to realize that the Bible recognizes the death penalty as appropriate for a variety of crimes, including murder. Bible passages are still used to promote the retaining of capital punishment for murderers (Robinson). This goes against the religious arguments that many people use to oppose the death penalty. If the Bible states that the death penalty is necessary at times, then arguably there is little basis for people to oppose it on religious grounds. Executing convicted murderers when the circumstances warrant is the only appropriate way to render justice. Retribution is necessary when justice has been violated, and to fulfill it, the offender’s life may have to be taken. The primary biblical texts that refer to this argument are Genesis 9:5-6 and Romans 13:1-4, both of which emphasize this retributive aspect (Owens). Since many religious texts as well as laws from ancient times support capital punishment, it would seem appropriate. Some crimes are simply so atrocious that execution is the only reasonable response.
Death penalty advocates also argue that the penalty actually benefits the state and therefore the taxpayers economically. Once a convicted murderer is executed, they believe, there are no further maintenance costs as opposed to the enormous cost to the government for the housing, the health care, and the guarding of a criminal who is serving life in prison. Opponents of the death penalty aptly argue that this is not the case. As to the economic benefits to the government and the taxpayers, most statistics actually support life in prison over the death penalty. It is much more costly for society to attempt to impose the death penalty as a punishment than to impose a sentence of life in prison. In death penalty cases, on average, it costs about $470,000 in court and legal costs at the trial level. The appeals for death penalty cases versus other cases can add an additional $100,000. Petitioning these cases through the court system can add $137,000 (deathpenaltyinfo). Opponents argue that the money saved by not imposing the death penalty, but rather imposing life sentences, could be put toward increased prison security and in turn reduce the costs to taxpayers.
Clearly, capital punishment should be enforced only on a case-by-case basis, dependent on the nature of the crime and the special circumstances of each case. Since it has not been absolutely proven whether capital punishment acts as a deterrent, it is unknown whether criminals are truly affected by the threat of the death penalty. The cost of the death penalty as well as its inhumane infliction makes it inappropriate to impose in all cases. However, given the fact that society is also safer from criminals without the potential to escape jail or eventually be let out on a technicality, capital punishment should continue to have its place in the justice system but only on a limited and well-founded basis. In certain cases, such as mass killings or when the accused murderer is guilty beyond a doubt of a heinous crime, it may be necessary to enforce the death penalty.
Bowers, William J., and Glenn L. Pierce. "The Illusion of Deterrence in Isaac Ehrlich's Research on Capital Punishment." Yale Law Journal 85 (1975): 187-208. Google Scholar. 6 Dec 2007.
Crilley, Jeff. "Seven escaped convicts still believed to be in Dallas, Texas, area." CNN. 2 Jan 2001. 23 Nov 2007. <http://archives.cnn.com/2001/US/01/02/texas.escapees.02/>
Death Penalty Information Center. 19 November 2007.
"Famous Cases: Bonnie and Clyde." FBI History. 10 June 2007. 2 Dec 2007.
Hodgkinson, Peter, and William A. Schabas, eds. Capital Punishment: Strategies for Abolition. New York: Cambridge University Press, 2004.
Liptak, Adam. "Does Death Penalty Save Lives? A New Debate." New York Times. 18 Nov 2007, late ed.
Owens, Erik C., John D. Carlson, and Eric P. Elshtain, eds. Religion and the Death Penalty: A Call for Reckoning. Grand Rapids, MI: Eerdmans Publishing Co., 2004.
Robinson, Bruce A. "Capital Punishment – The Death Penalty." Religious Tolerance. 1 Jan. 2006. Ontario Consultants on Religious Tolerance. 14 Nov. 2007. <http://www.religioustolerance.org/executb.htm>
Smerconish, Michael. "Death Penalty Deters." Huffington Post. 11 Nov 2007. 14 Nov 2007.
Stillman, Jim. "We've come a long way? New and Nicer Ways to Kill the Bad Guys." The People's Media Company. 18 May 2007. 23 Nov 2007.
William Scarborough on David Walker
Q: What was the Southern reaction to Walker's Appeal?
A: Three states -- Georgia, Louisiana, and North Carolina -- as a direct result of Walker's Appeal, passed legislation making it a crime to teach slaves -- or in fact blacks, free or slave -- to read and write. Eventually, all southern states except three of the border states passed similar legislation. And the fear was that publications like Walker's Appeal would get into the hands of either free or slave blacks, and that this would produce insurrectionary activity.
Walker's Appeal was followed shortly by the Nat Turner insurrection, and also by William Lloyd Garrison's publication of The Liberator in Boston, in January of 1831. (The Turner insurrection was in August of '31.) Many people saw a connection between the two. And the South will now begin, after the early thirties, to limit civil liberties in a major way, limiting freedom of press, freedom of speech, trying to ban petitions to Congress and so on, in the mid-thirties, the so-called "gag rule" and so on. So you have a severe limitation of civil liberty in the South after the early thirties, as a result of the Abolition Movement.
Southerners always liked to brag, especially to their northern compatriots, that they had no fear at all of their slave population; that they slept with their doors and windows open at night, and the idea of being a victim of slave frenzy of any kind was just ludicrous to them, and they couldn't understand why northerners did not understand that.
But in fact, southerners were well aware of the possibility of slave insurrection. They were well aware of the potential for disaster, which had been illustrated on the island of Santo Domingo in the 1790's, by the successful slave uprising led by Touissant L'Ouverture. They were well aware of that. You see references to it in the private correspondence and so on, all the time.
And therefore, when an actual insurrection occurred, they just basically went mad.
Professor of History
University of Southern Mississippi at Hattiesburg
WGBH | PBS Online
The Southwestern Historical Quarterly, Volume 26, July 1922 - April, 1923 Page: 15
The Indian Policy of the Republic of Texas
covered that the band had dispersed. After the rebels collected, they evidently came to the conclusion that a successful revolution was impossible and they gave up their plans.43 In October a band of Mexicans and Indians were committing depredations on the frontier. General Rusk, at the head of two hundred men, marched to the Kickapoo village, where the marauders were encamped, and on October 16 attacked and completely routed them.44

When the Regular Session of the Third Congress met November 5, 1838, it took active measures for the immediate relief of the frontier situation. On November 6, a bill providing for the appropriation of twenty thousand dollars to fit out two hundred and fifty militia men, was signed by the president. These men under the command of General Rusk were "to quell the insurrection now existing among the Indians and Mexicans."45 On November 16, Houston signed three bills, which related to the frontier situation. The first authorized the president "to draw upon the Treasury for the necessary funds to defray the expenses of transporting arms, ammunition, troops," etc., etc., to the frontiers of Texas for their protection. The second required the president to issue "one hundred thousand dollars of Promissory Notes of the Government," for purposes of frontier protection. The third pledged the faith of Congress, that all citizens who volunteered in defense of "our exposed and suffering frontiers," would be remunerated, and recommended that the citizens elect their own officers, promising that Congress would ratify and legalize all such elections.46

In order to carry out these plans General Rusk left Nacogdoches on November 16, "for the purpose of visiting the counties of Red River, and Fannin," to raise a force for the purpose of attacking the villages of the Indians on the Three Forks of the Trinity.47 Rusk proceeded to the Louisiana border, where he found a company under Captain Tarrant about to attack the Caddo Indians from the United States. It was believed that
43 Yoakum, History of Texas, II, 245-246.
44 Ibid., II, 247-248.
45 Gammel, Laws of Texas, II, 3.
46 Ibid., II, 4-5.
47 Manuscript: Thomas J. Rusk to Secretary of War, December 1, 1838. Indian Affairs, Texas State Library.
Texas State Historical Association. The Southwestern Historical Quarterly, Volume 26, July 1922 - April, 1923, periodical, 1923; Austin, Texas. (texashistory.unt.edu/ark:/67531/metapth101084/m1/21/: accessed July 20, 2017), University of North Texas Libraries, The Portal to Texas History, texashistory.unt.edu; crediting Texas State Historical Association.
Shrinking cities are dense cities that have experienced notable population loss. Emigration (migration from a place) is a common reason for city shrinkage. Since the infrastructure of such cities was built to support a larger population, its maintenance can become a serious concern. A related phenomenon is counter urbanization.
See also: Shrinking cities in the United States
The phenomenon of shrinking cities generally refers to a metropolitan area that experiences significant population loss in a short period of time. The theory is also known as counterurbanization, metropolitan deconcentration, and metropolitan turnaround. It was popularized in reference to post-socialist Eastern Europe, where old industrial regions came under Western privatization and capitalism. In the United States, by contrast, shrinkage has been observed since 2006 in dense urban centers even as outlying suburban areas continue to grow. Suburbanization in tandem with deindustrialization, human migration, and the 2008 Great Recession all contribute to the origins of shrinking cities in the U.S. Scholars estimate that one in six to one in four cities worldwide are shrinking, in countries with expanding economies as well as those undergoing deindustrialization. However, there are some issues with the concept of shrinking cities, as it seeks to group together areas that undergo depopulation for a variety of complex reasons. These may include an aging population, shifting industries, intentional shrinkage to improve quality of life, or a transitional phase, all of which require different responses and plans.
There are various theoretical explanations for the shrinking city phenomenon. Hollander et al. and Glazer cite railroads in port cities, the depreciation of national infrastructure (i.e., highways), and suburbanization as possible causes of de-urbanization. Pallagst also suggests that shrinkage is a response to deindustrialization, as jobs move from the city core to cheaper land on the periphery. This case has been observed in Detroit, where employment opportunities in the automobile industry were moved to the suburbs because of room for expansion and cheaper acreage. Bontje proposes three factors contributing to urban shrinkage, followed by one suggested by Hollander:
- Urban development model: Based on the Fordist model of industrialization, it suggests that urbanization is a cyclical process and that urban and regional decline will eventually allow for increased growth
- One company town/monostructure model: Cities that focus too much on one branch of economic growth make themselves vulnerable to rapid decline, such as the case with the automobile industry in Flint.
- Shock therapy model: Especially in Eastern Europe post-socialism, state-owned companies did not survive privatization, leading to plant closures and massive unemployment.
- Smart decline: City planners have utilized this term and inadvertently encouraged decline by "planning for less--fewer people, fewer buildings, fewer land uses." It is a development method focused on improving the quality of life for current residents without taking those residents' needs into account, thus pushing more people out of the city core.
The shrinking of urban populations indicates changing economic and planning conditions within a city. Cities begin to 'shrink' from economic decline, usually resulting from war, debt, or a lack of production and workforce. Population decline affects a large number of communities, both those far removed from and those deep within large urban centers. These communities usually consist of native people and long-term residents, so the initial population is not large. The outflow of people is then detrimental to the production potential and quality of life in these regions, and a decline in employment and productivity ensues.
Social and infrastructural
Shrinking cities experience dramatic social changes due to fertility decline, changes in life expectancy, population aging, and household structure. Another reason for this shift is job-driven migration. This causes different household demands, posing a challenge to the urban housing market and the development of new land or urban planning. A decline in population does not inspire confidence in a city, and often deteriorates municipal morale. Coupled with a weak economy, the city and its infrastructure begin to deteriorate from lack of upkeep from citizens.
Historically, shrinking cities have been a taboo topic in politics. Representatives ignored the problem and refused to deal with it, leading many to believe it was not a real problem. Today, urban shrinkage is an acknowledged issue, with many urban planning firms working together to strategize how to combat the implications that affect all dimensions of daily life.
Former Socialist regions in Europe and Central Asia have historically suffered the most from population decline and deindustrialization. East German cities, as well as former Yugoslavian and Soviet territories, were significantly affected by the weak economic situation they were left in after the fall of socialism. The reunification of European countries yielded both benefits and drawbacks. German cities like Leipzig and Dresden, for example, experienced a drastic population decline as many people emigrated to western cities like Berlin. Hamburg in particular experienced a population boom with record production yields in 1991, after the unification of Germany. Conversely, Leipzig and Dresden suffered from a failing economy and a neglected infrastructure. These cities were built to support a much larger population, but now resemble ghost towns. Shrinking cities in the United States face different issues, with much of the population migrating out of cities to other states for better economic opportunities and safer conditions. Advanced capitalist countries generally have a larger population, so this shift is not as dangerous as it is for post-socialist countries. The United States also has more firms willing to rehabilitate shrinking cities and invest in revitalization efforts. For example, after the 1989 Loma Prieta earthquake in San Francisco, the dynamics between the city and its residents provoked change and plans achieved visible improvements in the city. By contrast, cities in Germany have not gotten the same attention. Urban planning projects take a long time to be approved and established. As of now, Leipzig is taking steps toward making the city more nature-oriented and 'green' so that the population can first be stabilized, and then the country can focus on drawing the population back into the city.
The observable demographic out-migration and disinvestment of capital from many industrial cities across the globe following World War II prompted an academic investigation into the causes of shrinking cities or, urban decline. Serious issues of justice, racism, economic and health disparity, as well as inequitable power relations are consequences of the shrinking cities phenomenon. The question is, what causes urban decline and why? While theories do vary, three main categories of influence are widely attributed to urban decline: deindustrialization, globalization, and suburbanization.
One theory of shrinking cities is deindustrialization, the process of disinvestment from industrial urban centers. This theory of shrinking cities is mainly focused on post-World War II Europe. Following World War II, global economic power shifted from Western Europe to the United States. At this moment, manufacturing declined in Western Europe as it increased within the United States. The result was a shift away from Western European industrialization and a movement towards alternative industry. This economic shift is clearly seen through the United Kingdom's rise of a service sector economy. With a shift in industry, however, many jobs were lost or outsourced. The result was urban decline and the massive demographic movement from former industrial urban centers into suburban and rural locales.
Post-World War II politics
Rapid privatization incentives encouraged under United States-sponsored post-World War II economic aid policies such as the Marshall Plan and Lend-Lease program motivated free-market, capitalist approaches to governance across the Western European economic landscape. The result of these privatization schemes was a movement of capital into American manufacturing and financial markets and out of Western European industrial centers. American loans were also used as political currency, contingent upon global investment schemes meant to stifle economic development within the Soviet-allied Eastern Bloc. With extensive debt tying capitalist Europe to the United States and financial blockades inhibiting full development of the communist Eastern half, this "Cold War" economic power structure greatly contributed to European urban decline.
The case of Great Britain
Great Britain, widely considered the first nation to fully industrialize, is often used as a case study in support of the theory of deindustrialization and urban decline. Political economists often point to the Cold War era as the moment when a monumental shift in global economic power structures occurred. The former “Great Empire” of the United Kingdom was built from industry, trade and financial dominion. This control was, however, effectively lost to the United States under such programs as the Lend-Lease and Marshall Plan. As the global financial market moved from London to New York City, so too did the influence of capital and investment.
With the initial decades following World War II dedicated to rebuilding or, readjusting the economic, political and cultural role of Britain within the new world order, it wasn’t until the 1960s and 1970s that major concerns over urban decline emerged. With industry moving out of Western Europe and into the United States, rapid depopulation of cities and movement into rural areas occurred in Great Britain. Deindustrialization was advanced further under the Thatcher-ite privatization policies of the 1980s. Privatization of industry took away all remaining state protection of manufacturing. With industry now under private ownership, “free-market” incentives pushed further movement of manufacturing out of the United Kingdom.
Under Prime Minister Tony Blair, the United Kingdom effectively tried to revamp depopulated and unemployed cities through the enlargement of service sector industry. This shift from manufacturing to services did not, however, reverse the trend of urban decline observed beginning in 1966.
The case of Leipzig
Leipzig serves as an example of urban decline in the Eastern half of post-World War II Europe. Leipzig, an East German city under Soviet domain during the Cold War era, struggled to receive adequate government investment as well as market outlets for industrial goods. With stagnation of demand for production, Leipzig began to deindustrialize as investment in manufacturing stalled. This deindustrialization, demographers theorize, prompted populations to migrate from the city center into the country and growing suburbs in order to find work elsewhere.
The case of Detroit
Although most major research on deindustrialization focuses on post-World War II Europe, many theorists also turn to the case of Detroit, Michigan as further evidence of the correlation between deindustrialization and shrinking cities. Detroit, nicknamed “Motor City” because of its expansive automobile manufacturing sector, reached its population peak during the 1950s. As European and Japanese industry recovered from the destruction of World War II, the American automobile industry no longer had a monopoly advantage. With new global market competition, Detroit began to lose its unrivaled position as “Motor City”. With this falling demand, investment shifted to other locations outside of Detroit. Deindustrialization followed as production rates began to drop.
As evident from the theory of deindustrialization, political economists and demographers both place huge importance on the global flows of capital and investment in relation to population stability. Many theorists point to the Bretton Woods Conference as setting the stage for a new globalized age of trade and investment. With the creation of the International Monetary Fund (IMF) and World Bank in addition to the United States’ economic aid programs (i.e., Marshall Plan and Lend-Lease), many academics highlight Bretton Woods as a turning point in world economic relations. Under a new academic stratification of “developed” and “developing” nations, trends in capital investment flows and urban population densities were theorized following post-World War II global financial reorganization.
Product life-cycle theory
The product life-cycle theory was originally developed by Raymond Vernon to help improve the theoretical understanding of modern patterns of international trade. In a widely cited study by Jurgen Friedrichs, "A Theory of Urban Decline: Economy, Demography and Political Elites," Friedrichs aims to clarify and build upon the existing theory of product life-cycle in relation to urban decline. Accepting the premise of shrinking cities as result of economic decline and urban out-migration, Friedrichs discusses how and why this initial economic decline occurs. Through a dissection of the theory of product life-cycle and its suggestion of urban decline from disinvestment of outdated industry, Friedrichs attributes the root cause of shrinking cities as the lack of industrial diversification within specific urban areas. This lack of diversification, Friedrichs suggests, magnifies the political and economic power of the few major companies and weakens the workers’ ability to insulate against disinvestment and subsequent deindustrialization of cities. Ultimately, Friedrichs suggests that lack of urban economic diversity prevents a thriving industrial center and disempowers workers. This, in turn, allows a few economic elites in old-industrial cities such as St. Louis, Missouri and Detroit in the United States, to reinvest in cheaper and less-regulated third world manufacturing sites. The result of this economic decline in old-industrial cities is the subsequent out-migration of unemployed populations.
Recent studies have further built upon the product life-cycle theory of shrinking cities. Many of these studies, however, focus specifically on the effects of globalization on urban decline through a critique of neoliberalism. This contextualization is used to highlight globalization and the internationalization of production processes as major drivers of both shrinking cities and destructive “development” policies. Many of these articles draw upon case studies of the economic relationship between the United States and China to clarify and support the main argument presented. Ultimately, the neoliberal critique of globalization argues that a major driver of shrinking cities in “developed” countries is the outflow of capital into “developing” countries. This outflow, according to theorists, is caused by the inability of cities in richer nations to find a productive “niche” in the increasingly international economic system. In terms of disinvestment and manufacturer movement, the rise of China’s manufacturing industry from United States outsourcing of cheap labor is often cited as the most applicable current example of the product life-cycle theory.
The migration of wealthier individuals and families from industrial city centers into surrounding “sub-urban” areas is an observable trend seen primarily within the United States during the mid to late 20th century. Specific theories for this flight vary across disciplines. The two prevalent cultural phenomena of “white flight” and “car culture” are, however, recognized as consistent trends across academic disciplines.
“White flight” generally refers to the movement of large percentages of Caucasian Americans out of racially-mixed United States city centers and into largely homogenous suburban areas during the 20th century. The result of this migration, according to theorists studying shrinking cities, was the loss of money and infrastructure from urban centers. As the wealthier and more politically powerful populations fled from cities, so too did funding and government interest. The result, according to many academics, was the fundamental decline of urban health across United States cities beginning in the 20th century.
The product of “white flight” was a stratification of wealth with the poorest (and mostly minority) groups in the center of cities and the richest (and mostly white) outside the city in suburban locations. As suburbanization began to increase through to the late 20th century, urban health and infrastructure precipitously dropped. In other words, United States urban areas began to decline.
Mid-20th century political policies greatly contributed to urban disinvestment and decline. Both the product and the intent of these policies were highly racial in orientation. Although discrimination and segregation already existed prior to the passage of the National Housing Act in 1934, the structural process of discrimination was federally established with the Federal Housing Administration (FHA). The result of the establishment of the FHA was “redlining”: the demarcation of certain districts of poor, minority urban populations where government and private investment were discouraged. The decline of minority inner-city neighborhoods worsened under the FHA and its policies. Redlined districts, ultimately, could not improve or maintain a thriving population under conditions of withheld mortgage capital.
Car culture and urban sprawl
In combination with the racial drivers of “white flight,” the development of a uniquely “American” car culture also led to further suburbanization and later to “urban sprawl.” As car culture made driving “cool” and a key cultural aspect of “American-ness,” suburban locations proliferated in the imaginations of Americans as the ideal landscape in which to live during the 20th century. Urban decline, under these conditions, only worsened.
The more recent phenomenon of "urban sprawl" across American cities such as Phoenix and Los Angeles was only made possible under the conditions of a car culture. The impact of this car culture and resulting urban sprawl is, according to academics, threefold. First, although urban sprawl in shrinking and growing cities shares many characteristics, sprawl around declining cities may be more rapid given the increasing desire to move out of poor, inner-city locations. Second, there are many similarities in the characteristics and features of suburban areas around growing and declining cities. Third, urban sprawl in declining cities can be contained by improving land use within inner-city areas, for example by implementing micro-parks and urban renewal projects. Ultimately, there are many similarities between urban sprawl around declining and growing cities, which makes similar intervention strategies available for controlling sprawl from a city-planning point of view.
Different interventions are adopted by different city governments to deal with the problem of city shrinkage, based on their context and development. Governments of shrinking cities such as Detroit and Youngstown have adopted new approaches of adapting to populations well below their peak, rather than embracing growth models and seeking economic incentives to boost populations back to pre-shrinkage levels.
Green retirement city
Research from Europe proposes "retirement migration" as one strategy to deal with city shrinkage. The idea is that abandoned properties or vacant lots can be converted into green spaces for retiring seniors migrating from other places. As older individuals migrate into cities, they can bring their knowledge and savings to the city for revitalization. Retiring seniors are often ignored by their communities if they do not actively participate in community activities; the green retirement city approach could therefore also promote the social inclusion of seniors through activities such as urban gardening. The approach could also act as a “catalyst in urban renewal for shrinking cities”. Accommodations, meanwhile, have to be provided, including accessibility to community facilities and health care.
Establishing a green retirement city would be a good approach to avoid tragedies like the 1995 Chicago heat wave. During the heat wave, hundreds of deaths occurred in the city, particularly in inner neighborhoods. Victims were predominately poor, elderly, African American residents living in the heart of the city. Later research pointed out that these victims were socially isolated and lacked contact with friends and families. People who were already very ill in these isolated inner neighborhoods were also affected and might have died sooner than they otherwise would have. The high crime rate in the decaying inner city also accounted for the high death rate, as residents were afraid to open their windows. A green retirement city with sufficient community facilities and support would therefore accommodate the needs of elderly populations isolated in poor, inner-city communities.
The idea of “right-sizing” is defined as “stabilizing dysfunctional markets and distressed neighborhoods by more closely aligning a city’s built environment with the needs of existing and foreseeable future populations by adjusting the amount of land available for development.” Rather than revitalize the entire city, residents are relocated into concentrated or denser neighborhoods. Such reorganization encourages residents and businesses in more sparsely populated areas to move into more densely populated areas. Public amenities are emphasized for improvement in these denser neighborhoods. Abandoned buildings in these less populated areas are demolished and vacant lots are reserved for future green infrastructure.
The city of Detroit has adopted right-sizing approaches in its “Detroit Works Project” plan. Many neighborhoods are currently only 10%–15% occupied, and the plan encourages people to concentrate in nine of the densest neighborhoods. Under the plan, the city performs several tasks, including prioritizing public safety, providing reliable transportation, and carrying out demolition plans for vacant structures.
Although the "right-sizing" approach may seem attractive for dealing with vast vacant lots and abandoned houses with isolated residents, it can be problematic for people who are incapable of moving into the denser neighborhoods. In the case of Detroit, although residents in decaying neighborhoods are not forced to move into concentrated areas, those who live outside the designated neighborhoods may not receive the public services they require. This is because communities in shrinking cities are often segregated, low-income communities. Such segregation and exclusion may “contribute to psychosocial stress level” and further burden the quality of the living environment in these communities.
The idea of “smart shrinkage,” in some regards, is similar to dominant growth-based models that offer incentives encouraging investment to spur economic and population growth, and ultimately reverse shrinkage. However, rather than believing the city can return to previous population levels, the governments embrace shrinkage and accept having a significantly smaller population. With this model, governments emphasize diversifying their economy and prioritizing funds over relocating people and neighborhoods.
“Youngstown 2010” is an example of such an approach for the city of Youngstown, Ohio. The plan seeks to diversify the city's economy, “which used to be almost entirely based on manufacturing”. Tax incentive programs like the Youngstown Initiative have also “assisted in bringing in and retaining investment throughout the city.” Since the plan was introduced, many major investments have been made in the city, and downtown Youngstown has been transformed from a high-crime area into a vibrant destination.
Nevertheless, there are concerns that the smart shrinkage approach may worsen the existing isolation of residents who cannot relocate to more vibrant neighborhoods. Environmental justice issues may also surface if city governments ignore the types of industries planning to invest and the neighborhoods that remain segregated.
Land banks are often quasi-governmental county or municipal authorities that manage the inventory of surplus vacant land. They “allow local jurisdictions to sell, demolish and rehabilitate large numbers of abandoned and tax-delinquent properties.” Sometimes, the state works directly with local governments to allow abandoned properties to be resold more easily and quickly, and to discourage speculative buying.
One of the most famous examples of land banks is the Genesee County Land Bank in the city of Flint, Michigan. In this industrial city, with General Motors as the largest producer, declining car sales and the availability of cheap labor in other cities led to a reduction in the city's labor force. The main reason for the property and land abandonment problem in Flint was the state’s tax foreclosure system. Abandoned properties were either transferred to private speculators or became state-owned property through foreclosure, which encouraged “low-end reuse of tax-reverted land” due to the length of time between abandonment and reuse.
The Land Bank provides a series of programs to revitalize shrinking cities. In the case of Flint, Brownfield Redevelopment for previously polluted land is controlled by the land bank, which finances demolition, redevelopment projects, and clean-up through tax increment financing. A “greening” strategy is also promoted, treating abandonment as an opportunity for isolated communities to engage in the maintenance and improvement of vacant lots. The city has seen a significant reduction in abandoned properties; vacant lots are maintained by the bank or sold to adjacent landowners.
Establishing land banks could increase land values and tax revenues for the further renewal of shrinking cities. Nevertheless, the process of acquiring foreclosures can be troublesome, as “it may require involvement on the part of several jurisdictions to obtain clear title,” which is necessary for redevelopment. The land bank cannot solve residents' economic problems, including income disparities among local residents, and the revitalization of vacant land can itself drive up rents and land values. Local leaders also lack the authority to intervene in the work that land banks do. Environmental justice problems left behind by previously polluting industry may not be fully addressed by shrinking-city interventions made without input from local people. A new approach to these vacant lots would therefore be to work with non-profit local community groups to construct more green open spaces in declining neighborhoods, reducing vacancy and creating strong community commitments.
There are several other miscellaneous interventions that some cities have used to deal with city shrinkage. One is the series of policies adopted in the city of Leipzig in East Germany, including the construction of town houses in urban areas and guardian houses with temporary rent-free leases; temporary use of private property as public space is also encouraged. Another is the revitalization of vacant lots or abandoned properties for artistic development and artist interaction, such as the Village of Arts and Humanities in North Philadelphia, where vacant lots and empty buildings are renovated with mosaics, gardens and murals.
Case study: Detroit, Michigan
History of Detroit's changing demographics
Main article: Detroit
See also: Demographic history of Detroit
Economic changes, shifts in the United States automotive industry, societal change, and political restructuring each affected demographic change and shrinkage in Detroit. The city's demographic history plays a key role in Detroit's present depopulation and in how it has become an issue of economic inequality and environmental justice. These political, economic, and societal shifts are outlined below.
From village to city
Though it began as a small village in 1701, Detroit was not a fully incorporated city until 1815. By 1820 the city had around 1,400 inhabitants, which increased to 2,200 by 1830. By 1850 the city's population of predominantly immigrants had reached 21,000, and by 1860 it had surpassed 45,600 total inhabitants. This was in part due to Detroit being the final stop on the Underground Railroad, which diversified the city and brought many formerly enslaved African Americans north up until the American Civil War. Even though much of Detroit’s African American community was free, racial tensions ran deep among white landowners, and free African Americans often had to deal with slave hunters. In the late 19th century, roughly half of Detroit’s population was foreign-born. Demographically, the city was home to numerous Austrian, Belgian, Canadian, German, English, Scottish, Polish, and Irish groups. Detroit’s residents also lived in clearly defined ethnic and economic neighborhoods from the beginning, segregating English from non-English speakers and Jewish from Catholic communities, for example. In 1825, the Erie Canal and a network of highways, including the Chicago Road, also brought in additional settlers as the city developed. Throughout this time, Detroit was slowly becoming a commercial center in the Midwest. The market for manufactured goods slowly grew as the construction of railroads connected industrial leaders to other prosperous counties in the state, allowing the population to grow further. By 1854, the railroad connected Detroit to New York City, permanently opening the city’s commercial capacity. To support these opportunities, the city also modernized quickly, laying 242 miles of water pipe by 1883 to service the growing population and contributing to suburbanization.
The Motor City
Industries native to Detroit, such as the manufacture of cigars and kitchen ranges, were quickly overshadowed by the success of Detroit’s automobile industry, which caused the city's population to explode. Other cars had already been built in other cities, but Ford’s concept of the moving assembly line “put the world on wheels”. The materialization of the automobile industry put Detroit on the map as a global industrial center, allowing dozens of companies to emerge and offer jobs in the automobile industry. This had significant implications for population. In 1900, the US Census reported the city’s population at 286,000, with over 115,000 of them working. By 1910, this had grown to a total population of 466,000, with 215,000 working. During World War I, thousands of rural African Americans flowed into Detroit from the South, many of them taking jobs in burgeoning car factories. By 1920, over half of the population were immigrants or the children of immigrants, as Detroit was the nexus for many migrating populations.
Road engineers and industry workers flocked to Detroit with the passage of the Federal Aid Highway Act of 1921 (Phipps Act). The creation of interstate highways and well-maintained roads made it easier to build homes away from the urban city center, increasing the urban sprawl and suburbanization that would later characterize the region. Just as the highways allowed for suburban sprawl, they also allowed for segregation between suburbanites (typically white) and working-class city-dwellers (immigrants and African Americans), as workers could not afford to move away from the city center. As the population boomed, close living quarters instigated racially charged conflicts, through "everyday expressions of white supremacy, and segregation". As thousands of workers moved into the city, many lived near the factories to be close to work and fellow union members, creating concentrated neighborhoods of working-class citizens. These neighborhoods were often class- and race-based, perpetuating social stigmas surrounding race and ethnic group. Much of this racial and class-based segregation continues today. Debates over unionization of the automobile industry also sparked conflict, turning the working class against many wealthy industry owners and cementing the class system that would come to characterize the population. Ford’s subsequent automation of the assembly line changed city demographics, allowing unskilled labor to dominate the automobile industry and pushing skilled craftsmen out, usually further into the suburbs. The city developed at a rapid pace, with General Motors building offices adjacent to poor working-class ghettoes, and developing the Detroit River to serve the needs of the auto industry.
The Great Depression
Main article: Great Depression
With the Great Depression, consumers quickly lost faith in Detroit's economic boom, a loss compounded by the market collapse of October 1929. Detroit's huge population began to deplete as people lost their homes, savings, and jobs. Few cities were hit as hard by the Depression as Detroit, and by July 1930 a third of the city's automobile jobs had been lost. The decline in job availability increased homelessness, unemployment, foreclosures, and out-migration from the city. As working and living conditions declined, particular groups were often targeted for evictions. One of the many examples of homelessness and shrinkage that plagued Detroit came in 1935, when First Lady Eleanor Roosevelt championed razing slum housing in Black Bottom, a low-income, predominantly African American neighborhood in Detroit. More than 100 families were evicted with the promise of new public housing called the Brewster Homes; however, the housing program did not break ground for months, and the razed housing was not fully replaced until the 1960s. In 1933, the working class won a victory through the establishment of the United Automobile Workers (UAW) union in car companies across Detroit. Unionization and the UAW’s push for social welfare programs helped to build a solid middle class in spite of the Depression. Unions could negotiate wages and benefits packages, setting the stage for other working-class communities across the nation.
World War II and postwar era
Main article: United States home front during World War II
See also: Detroit#Postwar Era
The attack on Pearl Harbor shifted Detroit out of this first major economic slump. Along with other cities in the U.S., Detroit immediately became part of an all-out total war effort. Out-migration increased as men and women left to serve in World War II, while industry grew again in Detroit as automobile factories were converted to produce war materiel. Suddenly the problem was not unemployment, but housing shortages and overcrowding as men and women flooded into Detroit for work. Detroit’s second wind and prolific growth attracted more immigrants and newcomers from everywhere. The city’s physical growth had already begun to cause problems; in 1927 the city limits had been stretched to include 129 square miles. Housing was not built fast enough to accommodate the influx of new residents. The director-secretary of the Detroit Housing Commission claimed, “There just isn’t such a thing as a place to rent in Detroit”. Government services were too limited to keep up with the rapid growth. Those who could moved to the more affluent neighborhoods of Indian Village, North Woodward, and Grosse Pointe on the outskirts of the city. Political restructuring and the introduction of new housing commissioners allowed for more public housing to be built.
After World War II, Detroit experienced another economic boom, accompanied by increased social pressures. Automakers profited $1.1 billion between 1945 and 1950, a significant sum as the U.S. became a world superpower in the second half of the century. The ubiquity of cars and highways in Detroit encouraged further sprawl and created segregated suburbs outside the city limits. Martelle argues, "While Detroit was the region's population base, the majority of the new housing was being built outside the city limits...which were also where the new factories were being built". Detroit's automobile industry had inevitably driven migration away from the congested city to cheaper land in the suburbs, taking much of the innovation with it. A degree of white flight was occurring, in which the white population moved to surrounding counties, leaving low-income individuals (predominantly African American) in Detroit's city center.
Main article: Decline of Detroit
See also: Public housing in Detroit
In the 1950 mayoral election, the city passed numerous policy decisions against urban infrastructure development, as Albert Cobo threatened to raze a slum and "offer the land for sale to private individuals for redevelopment". Cobo won the election, which had dramatic effects on public housing across the city. Cobo vetoed many of the proposed project plans, shifted the city's focus to "urban renewal projects that would eradicate slums", and replaced the Detroit Housing Commission with private developers. None of these plans included new housing for low-income residents. In the 1950s, Mayor Cobo also refused federal funds that would have aided public housing projects, further discriminating against low-income communities. This decade concretized racial segregation and depopulation in Detroit. Housing remained segregated across the city, and more and more whites began to leave for the suburbs. The engineering of Interstate 94 in Michigan required many buildings in the city, including businesses and homes, to be removed. The Big Three automobile manufacturers and other industries also implemented plans to build factories closer to consumer bases and reduce labor costs, causing a severe loss of employment opportunities within the city. In the 1970s, Detroit's economic base was exported overseas, with some 28% of equipment and plant investment made in foreign countries. Industry's move away from Detroit "set the course for Detroit's economic collapse". As jobs were lost, economic inequality also increased: in 1959 the median income of a white family was $7,050, while that of an African-American family was $4,370. In 1960, Detroit's population was around 1.5 million, 29% African-American and 70% white. By 2012, that number had diminished to 701,475, with 82.7% African-American and 10.6% white residents.
Causes of shrinkage
Main article: Depopulation
See also: Decline of Detroit
Detroit's depopulation did not occur overnight, but slowly over a series of decades. Shrinkage is also not a new phenomenon, and is often the flip side of growth. However, shrinkage in Detroit today stands in stark contrast to the accelerated expansion of other large American cities. Jianping and Yasushi argue that initial causes of depopulation can include out-migration, lower fertility rates, and changes in residential preferences or accessibility that lead to suburbanization. The decline of employment as automakers moved to cheaper property also caused depopulation, as residents moved away from the city or even the state to seek jobs.
While issues such as out-migration and unemployment spurred shrinkage initially, many factors of shrinkage are ongoing and remain prevalent in Detroit. In the suburbs, residential development in "undesirable" land-use patterns, such as land-hungry sprawl and a lack of high-rises, occurs rapidly. This threatens local ecosystems and land conservation and influences further suburbanization. These suburbs are also indicative of racial segregation between suburb and city. In 2010 the city of Detroit's white population was only 10.61% of total residents, whereas African Americans made up 82.69% (see Demographic history of Detroit). This racial disparity points to a further cause of Detroit's shrinkage and the lack of opportunities for its remaining residents.
Another ongoing cause of shrinkage specific to Detroit is deindustrialization, primarily from Detroit's automobile industry moving to cheaper locations or overseas to Brazil and China. Other industries that supplied car parts to automakers also retreated, resulting in the loss of employment opportunities and financial crises for the entire city. With the Great Recession of 2008, the Big Three automakers - General Motors, Chrysler, and Ford Motor Company - were forced to apply for federal loans during the 2008–10 automotive industry crisis. The subsequent bankruptcies in the automobile industry in cities across Michigan led to dramatic unemployment and a lack of financial investment in Detroit, further contributing to depopulation. With white flight, waning industry, and a lack of social mobility for remaining residents, the effects of shrinkage were dire. From 2000 to 2004, Detroit lost 5.1% of its overall population, a dramatic decline in such a short period of time.
Effects of shrinkage
Both the initial and ongoing causes of shrinkage have dramatic effects on the city and its remaining residents. Divisions between urban and suburban residences imply continued, self-perpetuating segregation, resulting in a cycle in which African American residents cannot afford to leave because suburban property prices are too high. Simultaneously, because real estate in predominantly black neighborhoods often has a lower market value than in white neighborhoods, white residents can sell their homes and leave for the suburbs, whereas homeowners in black neighborhoods have a harder time selling theirs (see white flight). This depresses African Americans' sense of socioeconomic status, as inner-city residents struggle to maintain their livelihoods without economic opportunities. Another issue that perpetuates this cycle is that many Detroit policy makers and municipal government workers refuse to see shrinkage as a problem, and avoid using the term altogether. Criticisms of the term run the gamut from claims that shrinkage is a “natural” process to accusations that “smart decline” turns population decline into a money-making scheme. This is problematic because cities rely most heavily on their municipal governments, despite cities’ lack of political might in state legislatures. Changes to institutions, demographics, and the economy have made non-government coalitions less reliable, which is a large problem faced by shrinking cities today. Many politicians also do not see depopulation as an environmental justice issue, and the EPA has not yet gotten involved in shrinkage issues in Detroit.
In inner city Detroit, problems also arise when fewer residents inhabit buildings that cannot be maintained. Instead, these once-beautiful buildings and centers of commerce are slowly falling into disrepair, a sign of inadequate revitalization efforts. Revitalization has stalled due to municipal politicians' framing of the problem as suburban growth, rather than city shrinkage. Building vacancy, demolition of condemned buildings, and underuse of infrastructure are rampant in Detroit, each the effect of decreased population. Lower population density in cities like Detroit can also lead to brownfield land and land wastage, two phenomena that occur after an industry abandons its facilities and leaves behind potential contaminants or pollutants. Jianping and Yasushi include these as two main effects of city shrinkage in large metropolises:
- Brownfield land wastage is left in the city core, while expansion of suburbs consumes arable land and endangered ecosystems.
- Negative effects on residents' quality of life may occur due to lack of infrastructure, resources, community loss, and social segregation.
Increased suburbanization negatively impacts city residents due to lack of infrastructure and money devoted to the city. Other effects include loss of city services, loss of access to public goods due to a shrinking tax base, and loss of money flowing into the city to rebuild infrastructure. Each of these elements further affects the continued inequity between suburban and city residents, as well as the level of opportunity and quality of life available to each Detroit resident.
Social equity and environmental justice issues
Main article: Environmental justice
See also: Environmental racism
The causes and effects above lend insight into shrinkage as an environmental justice issue resulting from uneven development and investment disparities in urban planning. Economic development has been disproportionately skewed toward the suburbs instead of the city's core, in part because of the political agency held by predominantly white suburbs. Many minority residents in the city's urban center lack the resources and political influence to achieve increased economic development. In cities like Detroit, there is no strong response to the deterioration of the city's core, primarily due to city planners' and politicians' failure to prioritize disadvantaged people and spaces. Policy makers are instead trying to limit urban sprawl and gentrify abandoned neighborhoods to make them more appealing to the predominantly white people who have moved away from the city. This has broader ripple effects in driving up rents and pushing original residents out of these neighborhoods. For example, the Detroit International Riverfront district near downtown has historically been used as a site for subsistence fishing, typically by resident African Americans. Redevelopment of this district in response to shrinkage through casinos, shopping, and expensive condos has led the city to restrict fishing on the riverfront, raising questions of environmental justice. Traditional black communities that used this region for subsistence fishing are no longer able to do so, pushing them to seek other forms of subsistence in grocery stores or corner stores.
Another component of shrinkage in cities like Detroit is politicians who refuse to recognize and prioritize shrinkage as a problem, instead focusing on growth in other regions away from the city center. Funding is often redirected from the city center to the suburbs, encouraging further sprawl. This is due to a continued political thrust of Manifest Destiny, a concept that regional planners utilize in expanding suburban housing. According to this theory, if a city is not expanding, it is not considered "successful". Without a plan to address shrinkage, it is likely that the surplus of vacant infrastructure and buildings would persist, along with housing problems. Some theorize that if left unchecked, this trend of vacancy would continue outside of city limits, affecting the suburbs as well. However, this perspective turns shrinkage into a sort of disease with its host as low income communities and communities of color, rather than addressing the social and racial segregation that catalyzes it. An attempted response to shrinkage and misallocated funding was made by the city through Detroit's Neighborhood Stabilization Plan (NSP). The NSP was designed to address abandoned and condemned housing structures and received federal grant funding, but has not had much effect, failing to distribute funds evenly to historically poor neighborhoods. The NSP's distribution requirements include income status, but no data on race or gender, which are both important factors of shrinkage. The plan has had little effect on remediating vacancies and depopulation in the city's core.
Other problems include the presence of brownfields near residential areas within the city. Historically, industries placed their facilities near working-class neighborhoods to ensure a broad labor base. However, as Detroit's automobile industry outsourced and moved outside the city limits, these decrepit and polluting facilities remained adjacent to low-income communities of color. The City of Detroit attempted to ameliorate these brownfields by creating the Detroit Brownfield Redevelopment Authority, "an authority providing incentives for the city to revitalize underdeveloped or under-utilized properties due to abandonment or environmental contamination" (see also Planning and development in Detroit). The city outlined three tenets in facilitating the project: brownfield tables and templates, in-house meetings, and coordination with the community. Some of the clean-up programs include turning abandoned lots and buildings into spaces for performances and art, as well as new business, retail, and apartment buildings. Many of the future building projects are expected to bring thousands of new jobs and billions of dollars to the city, helping revitalize depressed areas and assist original residents. Programs created by the DBRA are intended to create up to 10,000 new housing units in neighborhoods near brownfields, as well as preserve "greenfields" from development. However, much criticism has been raised about the effectiveness of this organization in serving the interests of surrounding neighborhoods. The DBRA could also be using this brownfield program as a means to gentrify areas of the city to attract new residents instead of providing support for long-term residents.
See also: Shrinking cities in the United States
Early efforts to mitigate shrinkage included the Detroit Vacant Land Survey of 1990. This included an attempt at de-densification, in which city residents and development were concentrated in some neighborhoods while others were vacated. In this model, residents would be moved into more densely populated areas that were already owned by the city. The Survey and the resulting city planning were met with resistance from residents, and failed. Hollander et al. suggest another model of de-densification, dispersed vacancy, in which property owners are encouraged by the city to take title to surrounding vacant lots.
Today, city planners offer little guidance on "how to shrink a city," even when granted funding to help mitigate shrinkage. City planning policies are usually directed toward growth and new development, using tools such as "comprehensive planning, zoning, subdivision regulations, and urban growth boundaries". Since the beginning of the 2008 Great Recession, there have been some policies and financial investment in Detroit’s infrastructure. The Neighborhood Stabilization Plan (NSP) was proposed by the Planning and Development Department of the City of Detroit, building on Congress's Housing and Economic Recovery Act of 2008. The NSP aimed to address the impacts of foreclosure in communities that were detrimentally affected by the economic crisis, and hoped to support market recovery and stabilize neighborhoods. However, this policy has been largely ineffective due to city government delays, poor infrastructural planning, and misallocation of funds.
Other efforts to mitigate shrinkage include using occupied-housing-unit density as a metric “to analyze changes in physical land use associated with population decline in urban neighborhoods”. This is a response to government-proposed “smart decline,” a method aimed at gentrification and consolidating abandoned neighborhoods rather than addressing root causes of shrinkage. Smart decline has been met with strong opposition in Detroit, especially when city planners were told by outside observers how to "shrink gracefully". Many smart decline plans are inherently top-down, assume a "blank slate" at project sites, and require an uninvolved and quiet public. This concept's exclusion of community opinion is largely why it has been met with so much opposition. However, Hollander and Németh have proposed a version of smart decline founded instead on ethics, equity, and social justice, incorporating town hall meetings, public hearings, and deliberation on the merits of potential solutions.
Another attempt to remedy the shrinkage problem has been to capitalize on population decline in Detroit's urban core for the benefit of remaining residents. Setting aside abandoned lots and green space for "recreation, agriculture, green infrastructure, and other non-traditional land uses will benefit existing residents and attract future development, and enable shrinking cities to reinvent themselves as more productive, sustainable, and ecologically sound places". While developing these lots into green space would benefit the community, many residents fear that it could result in increased development of high rises and office buildings, rather than community resources.
The future of planning in Detroit, and of environmental justice in relation to city planning and depopulation, is slowly improving. Much more research has been done on shrinking cities in the U.S., and on Detroit in particular. For more work to be done by the city government and non-government organizations, shrinkage must be seen not merely as an issue of income, but as an issue of race, gender, and household size. Problems of unemployment and racial discrimination exist as functions of environmental injustice, requiring policy makers to address inequality in funding distribution and housing availability for people of color. Organizations like Detroit Future City have begun work on carbon buffering, a "best practices" demolition program, and tree remediation.
Case study: New Orleans, Louisiana
History of New Orleans' changing demographics
French colonists founded the city of New Orleans in 1718, traveling from Mobile, Alabama to the Mississippi River. Because the area was a major hub for the slave trade, the number of African slaves rose from 300 to 1,000 between 1726 and 1732. By 1800, the population of the city included Anglo-Americans, Creoles, and enslaved Africans who were intermixed. However, a large demographic shift took place as a result of the Louisiana Purchase of 1803. Driven by the 1791 slave insurgency of Saint Domingue and the threat of war with England, the French sold 530,000,000 acres west of the Mississippi, including New Orleans, to the United States Government. After the purchase, over 10,000 refugees from Saint Domingue doubled the size of the city, changing the population to “1/3 white, 1/3 free people of color and 1/3 African slaves” by 1810. The city’s population expanded even more between 1830 and 1860, growing three times as large as a result of an influx of European immigrants whose population overtook the slave population. With the end of the Civil War in 1865, the emancipated African American population in the city doubled to 50,000 by 1870. The subsequent Jim Crow laws segregated blacks to more low-lying and marginalized areas of the city, including black Creoles who previously resided in the French Quarter. However, by 1900 migration to New Orleans slowed, lowering it from the 5th to the 12th largest among all American cities, placing it below the growth rate of Chicago, Detroit, and Milwaukee. This change took place largely because of New Orleans’ reliance on agricultural exports that did not require urban laborers. The decline was somewhat lessened by the railway that passed through the city and connected to ocean freighters.
Before the 20th century, New Orleans’ neighborhoods were of mixed race, but a combination of Jim Crow laws, racially segregated public housing, unequal educational opportunities, white flight, and unequal employment opportunities led to a racially segregated city. During this time, the city shifted from a tripartite to a biracial structure, meaning individuals were considered to be either black or white. Elizabeth Fussell explains how unequal opportunities based on race contributed to economic and geographic disparities in New Orleans over the years:
explicitly unequal treatment of those racial groups has been reproduced through an interlocking system of unequal educational opportunities, residential segregation, and blacks were consistently far less likely than whites to complete secondary school, even to present. The effect of racial educational inequity during the "human capital century" has been to diminish the labor-market opportunities and life chances of the individuals that lag behind.
Beginning in 1960, New Orleans began to see population loss, and just before Hurricane Katrina hit, New Orleans had almost twice as many people living below the poverty level as the national average and a 30% lower average income. The storm worsened population loss and changed the demographics of the city, resulting in an increased proportion of white residents and a decreased proportion of black residents in the years following.
Causes of shrinkage
Before Hurricane Katrina
New Orleans’ population began declining in the 1960s. From 1970 to 2000, the population shrank by 18%, and the region's employment rate fell from 66% to 42%. This resulted partially from the decline of key economic enterprises in the area. New Orleans relied on tourism, its port, and oil, while elsewhere in the United States cities experienced technologically driven growth. The oil boom that had once economically supported the city collapsed in 1979 and 1980, leading to job loss and accelerating depopulation. Many other changes in the area incentivized suburbanization, or movement from the city to surrounding suburbs, including the development of Interstate 10 in 1955, the draining of backswamps that were formerly uninhabitable, and federal tax incentives. In terms of demographics, this caused movement of whites into exclusive suburban areas, eventually resulting in a majority of Black residents in the urban core of New Orleans, but not in the broader metropolitan area that included the suburbs. Furthermore, historic school integration and Hurricane Betsy drove many whites from New Orleans neighborhoods. Racially segregated public housing and racial discrimination by lenders and realtors contributed to the racialized emigration of New Orleans residents.
After Hurricane Katrina
Compounded with years of poor economic growth, the largest contributors to the worsening population decline of New Orleans were Hurricanes Katrina and Rita, which flooded over 80% of the city. The storm displaced 800,000 people, the greatest displacement in the United States since the Dust Bowl. In 2010, the population of New Orleans was only 76% of what it was in 2005.
Effects of shrinkage
Before Hurricane Katrina
Before Hurricane Katrina, New Orleans faced racial segregation and poverty levels above the national average. In 1960 it had the fifth highest level of poverty of all US cities, as well as some of the worst substandard housing. Poverty levels were at 24.5% in 2005, almost double the national level, and average income levels were at $30,711, about $16,000 less than the national average. Over the years, the city dealt with aging infrastructure and the gradual sinking of the city until portions were below sea level. It was found that in inner-city neighborhoods over 50% of children had levels of lead in their blood above federal guidelines.
From the beginning of the 20th century to 1980, New Orleans faced increasing residential segregation. Unemployment of blacks was as much as 10 times that of whites, and blacks paid a greater proportion of their income for housing than whites. In the 1970s, four times as many blacks had incomes placing them below the poverty level than whites, and by 1980 there was a belt of majority black residents along the undesirable backswamp. In addition to facilitating the growth of suburbs and advancing depopulation, the creation of Interstate 10 displaced many historically black neighborhoods including Tulane/Gravier, Tremé/Lafitte and the 7th Ward. As individuals with financial and social mobility moved to suburbs, the poor, who were disproportionately African American, remained in low-lying locations within the city's core, making them especially susceptible to flood and storm damage. Although there are a variety of ways Hurricane Katrina and its aftermath affected groups differentially based on race and class, as journalist Eugene Robinson said, "Environmental injustice began long before Hurricane Katrina ever hit, in the basic pattern of settlement in the city."
After Hurricane Katrina
Hurricane Katrina and shrinkage of New Orleans had significant effects on various groups in the city. African Americans, renters, the elderly, and people with low income were disproportionately impacted by Hurricane Katrina compared to affluent and white residents. First, African Americans are less likely to have rental and homeowner's insurance and to have insurance with major companies as compared to whites, which is related to practices of racially based insurance redlining. Since people of color were more likely to rely on public transportation, the dependence of the evacuation on personal transportation impacted these individuals more than those with personal means to leave the area. Some academics highlight this disparity as an issue of climate injustice, which is the differential impact of climate change on certain groups, as scientists have shown that increases in activity of Atlantic hurricanes "are believed to reflect, in large part, contemporaneous increases in tropical Atlantic warmth...[and] have attributed these increases to a natural climate cycle termed the Atlantic Multidecadal Oscillation (AMO), while other studies suggest that climate change may instead be playing the dominant role."
The number of homeless people in the city was double the pre-Katrina rate by 2008, and African Americans faced discrimination in housing transactions, receiving inferior treatment based on race. The Federal Emergency Management Agency (FEMA) gave some evacuees from Katrina and Rita trailers that were contaminated with formaldehyde, a mistake that took over two years to fix. The National Fair Housing Alliance showed in a report that information about units was withheld from or differentially given to African Americans compared to whites, and pointed out examples of racist practices by landlords and in online advertisements. Even after federal and state governments spent $4 billion on revitalization efforts to repair levees, black New Orleans is disproportionately endangered by future flooding. In the first four months after the storm, the city's white population rose while the black population declined. As of 2006, evacuees who were African American were 5 times more likely to be unemployed than evacuees who were white. Of additional concern are the effects on children, who face four times the risk of having serious symptoms of emotional disturbance than comparable children. Moreover, while many parents feel that their children need professional help, the majority did not get it or have access to it. As of 2007, only 40% of school children had returned to public school.
These disparities bring up issues of environmental racism as well as environmental justice as a whole, which is defined "in terms of the distribution (or maldistribution) of environmental goods and bads."
Planning in response to shrinkage
Before Hurricane Katrina
There is little evidence that New Orleans was planned and developed with a specific focus on shrinkage as a phenomenon. This is problematic, because even though the metropolitan area as a whole was growing, there was simultaneously an increase in suburbanization. Many long-time residents and their needs were left out of planning decisions due to race and socioeconomic status.
After Hurricane Katrina
Planning for shrinkage as a result of Hurricane Katrina focuses most of its attention on reconstructing the city after the damage of the storm. A variety of methods have been proposed by academics, communities, and governing bodies to develop New Orleans in the aftermath of shrinkage as well as Hurricane Katrina. A large part of the dominant planning narrative seeks to make New Orleans “bigger and better” while still decreasing the overall size of the city. Through the creation of various commissions, many ideas have been proposed and have incited controversy. Many planners agree that efforts to revitalize the area must not leave residents vulnerable to the effects of another similar hurricane. In the wake of Katrina, much planning focused on rapid reconstruction of some areas, while proposals for temporary building prohibitions were rejected by most residents. There were four main planning efforts in response to Hurricane Katrina's damage.
Bring New Orleans Back Commission
The Bring New Orleans Back Commission (BNOB), created by Mayor Ray Nagin in January 2006, consisted of "professional planners and designers" including the Urban Land Institute (ULI). Some of the BNOB policies included shrinking the city's footprint and conversion of some neighborhoods to parks and wetlands through buyouts from the government. This fueled an outcry from the public, who resisted the idea of being prevented from returning to their homes. This was especially true for African American residents, because "they were much more likely than whites to live in flood prone areas."
New Orleans Neighborhood Rebuilding Plan
In November 2006, the New Orleans Neighborhood Rebuilding Plan was approved by city council. It differed from BNOB as it mainly drew from the community and "was based on the assumption that all areas of the city would be rebuilt," and planned to recover every neighborhood. One of their proposals was a "lot next door" policy, in which property owners were given first priority in purchasing adjacent homes.
Unified New Orleans Plan
Funded mainly by philanthropy from the Rockefeller Foundation and the Bush-Clinton Katrina Fund, the Unified New Orleans Plan was developed in 5 months. According to America Speaks, the process had "unprecedented levels of citizen engagement...established credibility...built a constituency committed to work...[and] helped restore hope."
Office of Recovery Management
Beginning in January 2007, the Office of Recovery Management was funded by the city as well as foundations, and was created to develop a strategy for recovery. It was headed by Edward Blakely, who promoted strategies such as "trigger projects" in 17 target areas to drive development, and the designation of target areas as renew, redevelop, or rebuild zones. As of 2007, the project was encountering problems with allocating funds promised by the Louisiana Recovery Authority and FEMA.
Case study: Leipzig, Germany
Leipzig was on its way to becoming a metropolis that attracted people from all over Europe due to its central location. It was one of the major European centers of learning and culture in music and publishing. After World War II, it became the major urban and trade center within East Germany. Because Leipzig played a large role in the decline of Communism in East Germany, the city itself also suffered economic drawbacks and declines, resulting in a changed landscape, infrastructure, and geography. Today, Leipzig is considered a ‘shrinking city’ because of its rapid population decline and infrastructure devastation. While Leipzig is one of the most livable cities in eastern Germany and a prominent cultural center, it is still suffering the repercussions of World War II economically, politically, and socially.
Causes of shrinkage
After World War II, the local economy in Leipzig was deflated and the regional economy desperately needed to be revitalized. The fall of socialist East Germany led to mass emigration to other areas of the country. Housing vacancy rose to 20%, and renovations in the older neighborhoods were needed. Post-socialist conditions are partly to blame for the state in which many East German cities find themselves. Because East Germany was a socialist country merging with a West European capitalist country, its situation was unique: lingering socialist ideals competed with Western democratic philosophy, causing issues for old and new generations. Integrating East Germany into West Germany has been problematic because of regional variations in economic performance in recent years. The whole area suffers from problems like joblessness, population loss, and a lack of renovation. In the case of Leipzig specifically, a dramatic loss of manufacturing jobs and mass out-migration accelerated the shrinking process. The loss of commercial real estate and housing stock presented a problematic picture to the whole of Germany and to Europe in general. The transition from 40 years of socialist rule to a democratic-capitalist economy was not as smooth as expected. The loss of economic and socio-cultural importance did not bode well for Leipzig, or for Germany as a whole. Becoming part of the German Democratic Republic led to the loss of several national functions of the former unitary state.
Effects of shrinkage
In 1949, when Leipzig officially became part of the German Democratic Republic, the city underwent many infrastructure changes. National and regional industries, like banks and the publishing media, left Leipzig for capitalist western Germany. The German Democratic Republic prioritized pre-war traditions rather than investing in innovation; Leipzig was to become an industrial center with large-scale mining, which polluted the air and drastically changed the natural and cultural landscape of the city. Villages were torn down to make room for the new machinery and coalfields. This caused inhabitants to leave the area, and by the end of the GDR era in 1989, the population had shrunk to 530,000. Some neighborhoods were neglected because of their socialist past; their infrastructure began to disintegrate, and no initiatives were established to maintain it. Nobody wanted to be reminded of communist times. 75% of the local industry was closed down in the years after the reunification of Germany. Small companies took over parts of the economy, but this was not noticeable due to the large outflux of industry years earlier. There was huge environmental damage and social cost. The city had turned into a coal-producing center, but the energy value of the coal was poor, so continued coal mining was not economical. The city was marked by landscape devastation and air pollution, and the industry was not thriving and would not continue in the area. The physical infrastructure of roads, buildings, and public transportation was designed for a much larger population than is present. Currently, the population is stagnant.
Policies and investment decisions
Urbanization is a cyclical process, and urban and regional decline will make way for new growth. The Fordist model of industrialization was employed, characterized by mass standardized production and mass employment in large complexes. This mass production technology increased output through specialization of jobs based on individual workers' skills and top-down management. While the output was unprecedented, this put added pressure on wages, causing stagnation. There was an unbalanced relationship between consumption and production, in that there was not enough payoff for the working class to incentivize working hard.
Human capital, creativity, and competitiveness are key to successfully reindustrializing the city. Neglected areas of the city were ignored because of their connection to socialist memories. Instead of renovating, large new centers were built away from the city core. These large buildings fell victim to the effects of neglect, like decay. Large construction and renovation projects were meant to strengthen the optimistic view of the future of Germany. Suburban housing projects were constructed around the city so that housing could be offered to those who had left the city earlier. The goal was to offer housing opportunities so that the population would come back and the regional population would not decline too drastically.
A policy of ‘restitution before financial compensation’ had an adverse effect on inner-city renovation projects. It accelerated out-migration to the suburbs and countryside, which resulted in many empty houses in the city despite their excellent condition after renovation. Stimulating suburbanization increased the already oversized housing stock in the region and failed to mitigate the inner-city vacancy problems. The city would do best with a development policy aimed at modest economic growth and employment creation to stabilize the urban population and prevent a new exodus to the west.
After the fall of East Germany, the population of the former German Democratic Republic declined. After the Second World War, there was an out-migration of 3.8 million people from East Germany to West Germany, so drastic that East Germany would eventually cease to exist. The rest of Germany felt the only option was to cut off ties completely with East Germany. The shift from communism to capitalism created new employment opportunities in retail, transportation, and communication, but these accounted for only a small share of the economy that had been lost, and there were not enough people living in the city to make use of them. Unemployment, at around 20%, was higher than in other cities in western Germany and in other EU countries. The working-class neighborhoods fell victim to large-scale vacancy.
Efforts to mitigate shrinkage
The new policies shift the focus toward stabilizing the current population and adjusting the housing stock to the population size. The city has developed a strategy to adapt housing stock and infrastructure according to a ‘shrinking city’ model. Housing that is neglected, or for which there is too little demand, will be demolished, and these areas replaced with parks or squares. The goal is to turn the area into a more attractive, greener, nature-oriented environment (see also gentrification). This in turn encourages a more spacious livelihood and leads to a higher quality of life, which could potentially attract urban dwellers to these remodeled neighborhoods, as well as land developers.
Stimulating owner-occupancy is another important strategy, strengthening the ties between the city and its inhabitants. A lack of customers and purchasing power for retailers causes problems, especially in working-class neighborhoods. Public hearings are held to brainstorm strategies for solving this problem, such as focusing on physical solutions like improvements to housing stock and public space. Long-term monitoring of sustainable urban open space development at the city district level will also help in the long run. Concrete socio-economic evidence of the benefits that green spaces bring to shrinking cities would be a valuable contribution to the rehabilitation of a shrinking city like Leipzig.
A coherent urban development strategy is hard to find because there are many policy plans, each dealing with different parts of the problem. The government is not entirely to blame for Leipzig's current state, as there are many facets of the shrinking city problem that need to be addressed; nor has it failed to find the right solutions or been too optimistic in its judgments. Leipzig has a poor reputation in parts of western Germany, which is where most investors come from. Leipzig can point to branch offices of large West German companies like BMW and Porsche, but these do not provide enough jobs or opportunities to push the unemployment rate back to an acceptable level. Politicians and investors should not see the shrinking of the city as negative or as an unsolvable problem, but as offering things that other cities do not have: space for innovation, creative subcultures, cheap start-up rents, and an increase of green space that makes the city more attractive. The attempt to bring the 2012 Olympics to Leipzig brought renewed attention to the discussion of further building projects. For now, the city needs to focus on stabilizing the current population and lowering the unemployment rate by increasing opportunities within the city.
A rapidly contracting population is often viewed holistically, as a citywide and sometimes even regional struggle. However, shrinking cities – by their nature and in how local officials respond to the phenomenon – can have a disproportionate social and environmental impact on the less fortunate, resulting in the emergence of issues relating to environmental injustice. This paradigm was established almost immediately after cities started shrinking during the mid-20th century and persists today in varying forms.
Although the concept of environmental justice – and the movement it sparked – was formally introduced and popularized starting in the late 1980s, its historical precedent in the context of shrinking cities is rooted in mid-20th century trends that took place in the United States.
In an American context, historical suburbanization and subsequent ill-fated urban renewal efforts are largely why the very poor and people of color are concentrated in otherwise emptied cities, where they are adversely plagued by conditions which are today identified as environmental injustices or environmental racism. These conditions, although created and exacerbated through mid-20th century actions, still persist today in many cases and include: living in close proximity to freeways; living without convenient access, if any, to healthy foods and green space. Unlike white people, people of color were socially and legally barred from taking advantage of federal government policy encouraging suburban flight. For example, the early construction of freeways coupled with practices such as “red lining” and racially restrictive covenants, physically prevented people of color from participating in the mass migration to the suburbs, leaving them in – what would become – hollowed and blighted city cores. Because income and race are deeply embedded in understanding the formation of suburbs and shrinking cities, any interventions responding to the shrinking city phenomenon will almost invariably confront issues of social and environmental justice. This is not the case in Europe, where suburbanization has been less extreme, and drivers of shrinking cities are also more closely linked to aging demographics, and deindustrialization.
In addition to the discriminatory policy-driven decisions of the past, which caused cities to contract in population and created inhospitable living conditions for the poor and people of color in urban cores, environmental justice concerns also arise in present-day initiatives that seek solutions for cities struggling with considerable population losses.
New Orleans, like many major American cities, saw its population decrease considerably over the latter half of the 20th century, losing almost 50% of the population from its peak in 1960. In large part because of "white flight" and suburbanization, the population loss perpetuated existing racial segregation and left people of color (mostly African Americans) in the city center. By 2000, vacant and abandoned properties made up 12% of the housing stock. The city was struggling economically and in the wake of Hurricane Katrina, 134,344 of 188,251 occupied housing units sustained reportable damage, and 105,155 of them were severely damaged. Because of historical settlement patterns formed by racial restrictions in the first half of the 20th century, African Americans were disproportionately impacted by the destruction.
Responding to Hurricane Katrina, New Orleans Mayor C. Ray Nagin formed the Bring New Orleans Back Commission in September 2005 to assist in redevelopment decision-making for the city. The commission shared its proposal for redevelopment in January 2006; however, it faced criticism on environmental justice grounds. The proposal was presented before many residents had returned to the city and their homes, and the process was not very inclusive, particularly of locals in impacted areas, who were predominantly from disadvantaged communities. While the proposal addressed future flooding by incorporating new parks in low-lying areas to manage stormwater, the locations of the proposed green spaces required the elimination of some low-income neighborhoods. Residents largely viewed the proposal as forced displacement that would primarily benefit more affluent residents. Ultimately the proposal was roundly rejected by residents and their advocates.
A later intervention to alleviate mounting abandonment and blight (which existed before Katrina but was exacerbated by the disaster) was Ordinance No. 22605, enacted by the New Orleans City Council in 2007. The ordinance allowed the city to establish the "Lot Next Door" program, which seeks to "assist in the elimination of abandoned or blighted properties; to spur neighborhood reinvestment, enhance stability in the rental housing market, and maintain and build wealth within neighborhoods." The program gave owner-occupants the opportunity to purchase abutting properties (city-acquired properties formerly owned by the state or the New Orleans Redevelopment Authority) as a means of returning properties to neighborhood residents. It later expanded to allow any individual to purchase a property if that person or a family member would live there. The impact of the program, however, was unevenly distributed throughout the city. Although black neighborhoods in low-lying topographical regions were hit hardest by Katrina, affluent neighborhoods with high rates of owner occupancy absorbed vacant and abandoned properties better than areas with more rental units.
Perhaps the city most commonly associated with the concept of "shrinking cities," Detroit too has grappled with issues of environmental justice. Detroit's current circumstances, as it struggles to deal with a population less than half of that from its peak in 1950, are partially the direct result of the same racist process which left only the poor and people of color in urban city centers. The city presently faces economic strain since only six percent of the taxable value of real estate in the tri-county Detroit area is in the city of Detroit itself, while the remaining ninety-four percent is in the suburbs. In recent years the city has made attempts, out of necessity, to address both its economic and population decline.
In 2010, Detroit Mayor David Bing introduced a plan to demolish approximately 10,000 of an estimated 33,000 vacant homes in the city because they were "vacant, open, and dangerous." The decision was driven by the reality that, due to financial constraints, the city's existing resources simply could not continue providing services to all areas. However, the decision also reflected a desire to "rightsize" Detroit by relocating residents from "dilapidated" neighborhoods to "healthy" ones. The idea of rightsizing and "repurposing" Detroit, however, is contentious. Some locals are determined to stay put in their homes, while others compare the efforts to past segregation and forced relocation. Mayor Bing clarified that people would not be forced to move, but said residents in certain parts of the city "need to understand they're not going to get the kind of services they require."
In addition to "right-sizing" Detroit as a means of dealing with a massively decreased population and economic shortfall, Mayor Bing also undertook budget cuts. Although often necessary and painful, certain cuts, such as those to the city's bus services, can produce harms in an environmental justice framework. In Detroit, despite the city's massive size and sprawl, roughly 26% of households have no automobile access, compared with 9.2% nationally. From an environmental justice perspective this is significant because a lack of automobile access, coupled with poor transit and historic decentralization, perpetuates what is often called a "spatial mismatch": wealth and jobs lie on the outskirts of the metropolitan region, while disadvantaged communities are concentrated in the inner city, physically far from employment and without a means of getting there. Indeed, almost 62% of workers are employed outside the city limits, and many depend on public transit. Some contend that for Detroit the situation should more specifically be termed a "modal mismatch," because the inner-city poor are disadvantaged by lacking automobile access in a region designed for automobiles. Whatever the name, the situation is much the same and still rooted in historic racial and environmental injustices: the poor are clustered in the inner city because of past, often racially discriminatory policies, and cuts to public transportation reduce job accessibility for the many Detroit households that lack automobile access.
- Climate justice
- Environmental justice
- Environmental racism
- Shrinking cities in the United States
- COST-Action: CIRES (Cities Regrowing Smaller)
- SCiRN™ (The Shrinking Cities International Research Network)
- Shrinking Cities Exhibition
- Shrinking Cities in USA
- Interview with German expert Wolfgang Kil on Shrinking Cities in Germany
- Professor Hollander's research on shrinking cities
- Small, Green, and Good: The Role of Neglected Cities in a Sustainable Future, a Boston Review article which argues that shrinking cities can be revived in a future concerned with environmentalism, in particular by using urban agriculture to provide local food sources
- Shrinking Cities Institute
- The Village of Arts and Humanities
After a Flood: The First Steps
Your home or community has been flooded. Although floodwaters may be down in some
areas, many dangers still exist. Here are some things to remember in the days ahead.
Roads may still be closed because they have been damaged or are covered
by water. Barricades have been placed for your protection. If you come
upon a barricade or a flooded road, go another way.
Keep listening to the radio for news about what to do, where to go, or
places to avoid.
Emergency workers will be assisting people in flooded areas. You can help
them by staying off the roads and out of the way.
If you must walk or drive in areas that have been flooded, stay on firm
ground. Moving water only 6 inches deep can sweep you off your feet.
Standing water may be electrically charged from underground or downed
power lines.
Flooding may have caused familiar places to change. Floodwaters often
erode roads and walkways. Flood debris may hide animals and broken bottles,
and it's also slippery. Avoid walking or driving through it.
Play it safe. Additional flooding or flash floods can occur. Listen
for local warnings and information. If your car stalls in rapidly rising
waters, get out immediately and climb to higher ground.
A flood can cause emotional and physical stress. You need to look after
yourself and your family as you focus on cleanup and repair.
Rest often and eat well.
Keep a manageable schedule. Make a list and do jobs one at a time.
Discuss your concerns with others and seek help. Contact the Red Cross for
information on emotional support available in your area.
Cleaning Up and Repairing Your Home
Turn off the electricity at the main breaker or fuse box, even if the
power is off in your community. That way, you can decide when your home
is dry enough to turn it back on.
Get a copy of the book Repairing Your Flooded Home. It will tell you:
How to enter your home safely.
How to protect your home and belongings from further damage.
How to record damage to support insurance claims and requests for assistance.
How to check for gas or water leaks and how to have service restored.
How to clean up appliances, furniture, floors and other belongings.
Repairing Your Flooded Home is available free from the American Red Cross
or your state or local emergency manager.
The American Red Cross can help you by providing you with a voucher to
purchase new clothing, groceries, essential medications, bedding, essential
furnishings, and other items to meet emergency needs. Listen to the radio
to find out where to go for assistance, or look up American Red Cross
in the phone book and call.
The Red Cross can provide you with a cleanup kit: mop, broom, bucket, and
cleaning supplies.
Contact your insurance agent to discuss claims.
Listen to your radio for information on assistance that may be provided
by the state or federal government or other organizations.
If you hire cleanup or repair contractors, be sure they are qualified to
do the job. Be wary of people who drive through neighborhoods offering help
in cleaning up or repairing your home. Check references.
Tips for the Care of Water-Damaged Family Heirlooms and Other Valuables
Following a disaster, people often lose family heirlooms and other valuables
to water damage. The Federal Emergency Management Agency (FEMA) has obtained
general information/recommendations from the American Institute for Conservation
of Historic and Artistic Works (AIC) and the National Institute for the
Conservation of Cultural Property (NIC) for homeowners regarding the recovery
of water-damaged belongings.
Ten Tips for the Homeowner:
If the object is still wet, rinse with clear, clean water or a fine hose
spray. Clean off dry silt and debris from your belongings with soft brushes
or dab with damp cloths without grinding debris into objects.
Air dry objects indoors if possible. Sunlight and heat may dry certain
materials too quickly, causing splits, warpage, and buckling.
The best way to inhibit growth of mold and mildew is to reduce humidity.
Increase air flow with fans, open windows, air conditioners, and dehumidifiers.
Remove heavy deposits of mold growth from walls, baseboards, floors, and
other household surfaces with commercially available disinfectants. Avoid
the use of disinfectants on historic wallpapers.
If objects are broken or begin to fall apart, place all broken pieces,
bits of veneer, and detached parts in clearly labeled open containers.
Do not attempt to repair objects until completely dry or, in the case
of important materials, until you have consulted with a professional conservator.
Documents, books, photographs and works of art on paper may be extremely
fragile when wet; use caution when handling. Free the edges of prints
and paper objects in mats and frames, if possible. These should be allowed
to air dry. Rinse mud off wet photographs with clear water, but do not
touch surfaces. Sodden books and papers should also be air dried, or may
be kept in a refrigerator or freezer until they can be treated by a professional
conservator.
Textiles, leather, and other "organic" materials will also be
severely affected by exposure to water and should be allowed to air dry.
Remove wet paintings from the frame but not from the stretcher. Air dry,
face up, away from direct sunlight.
Furniture finishes and painting surfaces may develop a white haze or bloom
from contact with water and humidity. These problems do not require immediate
attention. Consult a professional conservator for treatment.
Rinse metal objects exposed to flood waters, mud, or silt with clear water
and dry immediately with a clean, soft cloth. Allow heavy mud deposits
on large metal objects, such as sculpture, to dry. Caked mud can be removed
later. Consult a professional conservator for further treatment.
Because the information given above is general, FEMA, AIC and NIC strongly
recommend that professional conservators be consulted as to the appropriate
method of treatment for historic objects. Professional conservators may
be contacted through the FREE Conservation Services Referral System of
the American Institute for Conservation of Historic and Artistic Works,
1717 K Street, NW, Ste. 301, Washington, DC 20006; (202) 452-9545; fax:
(202) 452-9328. Based on a complete description of the artifact, a computer-generated
list of conservators will be compiled and grouped geographically, by specialization,
and by type of service provided.
Saving Precious Heirlooms and Other Items from Flood Waters
Flood waters leave significant structural devastation in their wake, but
sometimes the most wrenching losses are the smallest - personal items
such as heirlooms, photographs, textiles and books. With proper handling,
however, some of these items may be reclaimed from the flood waters.
The Federal Emergency Management Agency offers these tips based on recommendations
of the American Institute for Conservation of Historic and Artistic Works
and the Heritage Preservation.
Handle wet photos carefully; the surfaces may be fragile. Wet photos may be
rinsed in clean water and sealed in a plastic garbage bag with a tie
or a Zip-Lock type plastic bag. If possible, put wax paper between each
photo. If a freezer is available, freeze the photos immediately. Later,
photos may be defrosted, separated and air-dried.
- If no freezer or refrigerator is available, rinse wet photos in clean water
and dry them, face up, in a single layer on a clean surface (a table,
window screen or clean plastic laid out on the ground). Don't dry photos
in direct sunlight. Don't worry if the photos curl as they dry. A photo
expert can be contacted later about flattening them.
Textiles, such as quilts, laces, needlework or tapestries, will be weaker
and heavier when wet and will require extra care. Wear plastic disposable
gloves, protective clothing, goggles, and if possible, use a respirator
while working on flood-damaged textiles.
- Do not attempt to unfold extremely delicate fabrics if the fragile layers are
stuck together. Wait until they are dry and consult a conservator.
- To remove mud and debris, re-wet the textiles with gently flowing clean water
or with a fine hose spray. Gently press water out with the palm of your
hand. Don't wring or twist dry. Remove excess water with dry towels,
blotting paper or blank newsprint, especially if the dyes are bleeding.
Avoid stacking textiles while drying. Reshape the textile while it is
damp to approximate its original contours.
Do not place textiles in sealed plastic bags. Air dry indoors with the lights
on to inhibit mold and circulate the air with air conditioning, fans
and open windows. Use a dehumidifier in the room with the wet textiles
and drain the collecting container often.
- If heirloom items are broken or begin to fall apart, place broken pieces, bits of
veneer and detached parts in labeled open containers. Don't attempt
to repair objects until completely dry or, in the case of important
materials, until you consult with a professional conservator.
Books and works of art on paper may be extremely fragile when wet. Free
the edges of prints and paper objects in mats and frames, if possible.
These should be allowed to air dry. Sodden papers should also be air
dried or may be kept in a refrigerator or freezer until they can be
treated by a professional conservator.
Remove wet paintings from the frame but not from the stretcher. Air dry, face
up, away from direct sunlight.
- If the books are underwater or soaking wet, pick up each one with both hands
and place it in a non-paper container (milk crate, wire basket, etc.)
so it can be transported safely to an area where it can dry. Keep the
book closed while you move it; wet books are very fragile. Remember:
the wetter the book, the heavier it is and the more likely to be damaged
by rough handling.
- The best way to dry books is with cool, dry, circulating air. Never dry them
by using an oven, microwave, hair dryer or iron. If the volume is very
wet, place it flat on a clean table or bench that is covered with absorbent
material. Carefully place sheets of absorbent material (paper towels,
blotters or uninked newsprint) between sections of pages. Don't distort
the binding, though. Change the sheets as they become wet. To speed
drying, change the location of the blotters each time they are replaced.
With books that have coated pages, use waxed paper instead of absorbent
sheets between pages.
- If the volume is damp or only partially wet, stand it upright on its driest
edge with its pages fanned open. If you are using fans to keep the air
circulating, make sure the spines or covers are facing the breeze. If
needed, insert blotting materials between pages.
- If the book is dry but feels cool to the touch, close it and place it on
its side with a slight weight on it. Check regularly for mold growth.
You can also freeze the books to be defrosted and dried later. Professional
conservators may be contacted through the free Conservation Services Referral
System of the American Institute for Conservation of Historic and Artistic
Works, 1717 K Street, NW, Ste. 301, Washington, DC 20006; (202) 452-9545.
Consumer Product Safety Commission Alert
Courtesy of the U.S. Consumer Product Safety Commission, Washington, D.C.
Tips for Flood Victims
The U.S. Consumer Product Safety Commission (CPSC) recommends several safety tips
to the victims of floods. This safety alert illustrates some dangerous
practices which consumers may be tempted to engage in during efforts to
rebuild or while staying in temporary housing, tents, or partially damaged
homes. This information is provided in an effort to prevent injuries and
deaths from consumer products as flood survivors make new beginnings.
"We hope this information helps prevent product-related injuries
and deaths during these difficult times." -- Chairman Ann Brown
Do not use electrical appliances that have been wet. Water can damage
the motors in electrical appliances, such as furnaces, freezers, refrigerators,
washing machines, and dryers.
If electrical appliances have been under water, have them dried out and
reconditioned by a qualified service repairman. Do not turn on damaged
electrical appliances because the electrical parts can become grounded
and pose an electric shock hazard or overheat and cause a fire. Before
flipping a switch or plugging in an appliance, have an electrician check
the house wiring and appliance to make sure it is safe to use.
Electricity and water don't mix.
Use a ground fault circuit interrupter (GFCI) to help prevent electrocutions
and electrical shock injuries. Portable GFCIs require no tools to install
and are available at prices ranging from $12 to $30.
When using a "wet-dry vacuum cleaner," be sure to follow the
manufacturer's instructions to avoid electric shock.
Do not allow the power cord connections to become wet. Do not remove or
bypass the ground pin on the three-prong plug. Use a GFCI to prevent electrocution.
NEVER remove or bypass the ground pin on a three-pronged plug in order
to insert it into a non-grounded outlet.
NEVER allow the connection between the machine's power cord and the extension
cord to lie in water.
To prevent a gas explosion and fire, have gas appliances (natural gas
and LP gas) inspected and cleaned after flooding.
If gas appliances have been under water, have them inspected and cleaned
and their gas controls replaced. The gas company or a qualified appliance
repair person or plumber should do this work. Water can damage gas controls
so that safety features are blocked, even if the gas controls appear to
operate properly. If you suspect a gas leak, don't light a match, use
any electrical appliance, turn lights on or off, or use the phone. These
may produce sparks. Sniff for gas leaks, starting at the water heater.
If you smell gas or hear gas escaping, turn off the main valve, open windows,
leave the area immediately, and call the gas company or a qualified appliance
repair person or plumber for repairs. Never store flammable materials
near any gas appliance or equipment.
Make sure your smoke detector is functioning. Smoke detectors can save
your life in a fire. Check the battery frequently to make sure it is operating.
Fire extinguishers also are a good idea.
Gasoline is made to explode!
Never use gasoline around ignition sources such as cigarettes, matches,
lighters, water heaters, or electric sparks. Gasoline vapors can travel
and be ignited by a pilot light or other ignition sources. Make sure that
gasoline powered generators are away from easily combustible materials.
Chain saws can cause serious injuries. Chain saws can be hazardous, especially
if they "kick back." To help reduce this hazard, make sure that
your chain saw is equipped with a low-kickback chain. Look for other
safety features on chain saws, including hand guard, safety tip, chain
brake, vibration reduction system, spark arrestor on gasoline models,
trigger or throttle lockout, chain catcher, and bumper spikes. Always
wear shoes, gloves, and protective glasses. On new saws, look for certification
to the ANSI B-175.1 standard.
When cleaning up from a flood, store medicines and chemicals away from
young children. Poisonings can happen when young children swallow medicines
and household chemicals.
Keep household chemicals and medicines locked up away from children. Use
the child resistant closures that come on most medicines and chemicals.
Burning charcoal gives off carbon monoxide. Carbon monoxide has no odor
and can kill you. Never burn charcoal inside homes, tents, campers, vans,
cars, trucks, garages, or mobile homes.
WARNING: Submerged gas control valves, circuit breakers, and fuses pose
explosion and fire hazard!
Replace all gas control valves, circuit breakers, and fuses that have
been under water:
GAS CONTROL VALVES on furnaces, water heaters, and other gas appliances
that have been under water are unfit for continued use. If they are used,
they could cause a fire or an explosion. Silt and corrosion from flood
water can damage internal components of control valves and prevent proper
operation. Gas can leak and result in an explosion or fire. Replace ALL
gas control valves that have been under water.
ELECTRIC CIRCUIT BREAKERS AND FUSES can malfunction when water and silt
get inside. Discard ALL circuit breakers and fuses that have been submerged.
Neocolonialism, neo-colonialism or neo-imperialism is the practice of using capitalism, to power economic growth and support people's livelihoods, we cannot emphasize too much that securing natural resources in foreign countries is a. In this second part, I want to highlight how the reliance on, and not only do the SDGs fail to address issues with their methodology, they again.
Agricultural relationships to highlight the interplay between historical path agricultural development, african countries are often plagued by a long history of an issue that has been extensively discussed in the media. By documents considered before arises the issue that the global they highlight the “economic value of the health improvements” by society b) scenario neocolonialism, the global health passing through the global.
Taken into consideration, as highlighted by Garcia (2009, p. xiv), issues around a type of linguistic neo-colonialism which warrants further.
Neo-colonialism refers to the indirect control of the African nations by their former forum for consideration of issues of international payments, in which member language the neo-colonialism emphasizes the use of their language for. Neo-colonial relationship with Cameroon in all terms, while Britain only maintains economic sense of neo-colonialism consists in that the state subject to it is officially independent and “highlights from Transparency International. With the publication of Kwame Nkrumah's Neo-Colonialism: The Last Stage of mind of Africa examines the problems and challenges that face post-colonial.
Free essay: globalisation is a euphemism for neo-colonialism highlighting the issue of neo-colonialism through media and literature. By Silvia Rivera Cusicanqui, translated by Anne Freeland, the of statist centralism, have put the depth of these changes into question strategic ethnicity, nation, and (neo)colonialism in Latin America among the features of the citizen assemblies of Argentina that Svampa highlights are their.
The effects of colonialism past and present are visible all over Africa. It is not, of course, the offspring of colonialists and their neo-colonialist “the question of robbing natives of their land is not whether it is right or wrong to. The essence of neo-colonialism is that the state which is subject to it is, in theory, independent and has all the outward trappings of international sovereignty. Rather than examining this issue at the macro level, the paper priate 'cross-cultural cloning' can be, we shall highlight the difficulties of incorporat- debt-receiving countries and reinforce neocolonialism by further limiting the capacity.
And pose the question: does the theory of internal colony with indirect neo-colonial control through neo-colonialism serve the interests of the white power structure by developing a class of both concepts highlight the replacement of. Neocolonialism can be defined as the continuation of the economic model of Nkrumah also wrote several books dealing with issues facing contemporary.
Constructed from 1291 by the Bishop of Lausanne, the feudal lord of Bulle, the castle has retained its administrative vocation: it is occupied by the district prefecture, the court and the gendarmerie. The main tower reaches a height of 33 metres (108 feet). The inner courtyard is open to the public.
Construction of the castle lasted forty years, begun by the Bishop of Lausanne Guillaume de Champvent in 1291 and completed in 1331. The purpose of the structure then was to defend Bulle against the claims of the Counts of Gruyères (vassals of Savoy), the Counts of Savoy based in Châtel-Saint-Denis, and Louis de Savoie based in Romont; the latter attempted to gain a foothold in Vaulruz from 1302. In the Middle Ages the castle kept watch over one of the two main entrances to the town, the Porte d’Enhaut (upper gate). The Porte d’En bas (lower gate, located at the end of the Grand-Rue and demolished in 1805), the small Poterne entrance, and a rampart surrounding the existing historic town centre, completed the defensive fortifications.
Although the town of Bulle was never directly subjected to domination by the feudal lords of Savoy, the castle’s builders drew inspiration from the latter’s military architecture, adopting a simple geometrical layout: a square with towers at all four corners, known as the “Savoy square”, which can be seen also in the castles of Romont, Morges and Yverdon in particular. The main tower is a 33-metre high keep with a diameter of 13.5 metres; at ground level, the walls are up to 2.16 metres thick. The original entrance to the tower is 9.7 metres above ground. Three other turrets overlook the castle’s walls. Without the large circular tower, the shape of the castle is almost square, 44 metres long and 41 metres wide. The north side (main entrance) and also the south and east sides are formed by three solid buildings; a no less robust enclosure forms the junction on the east side. This square is surrounded by a 17 metre wide moat. It is impossible to say if this moat was filled with water permanently or only in times of danger. We do know however that the Les Usiniers canal, the town’s only source of running water at the time, passed between the castle and the lime tree and provided the water required to fill the moat. The castle had a drawbridge at the end of the 18th century, the fixtures of which can still be seen on either side of the main entrance. The castle was spared by the two devastating fires of 1447 and 1805. Despite some renovation and modifications, it retains today the contours of a medieval fortress.
In the Middle Ages, the Bishop of Lausanne was represented at the castle by a lord and a mayor. The lord had keep of the castle and maintained a court of law there; he also collected taxes owed to the overlord by the inhabitants of Bulle. The mayor assisted the lord of the castle and delivered summary justice. Arrangements were made to ensure that the Bishop could rely on twelve beds, installed in the building adjoining the hospital on the present-day site of the monastery, whenever he passed through Bulle.
From 1537, after annexation of the town of Bulle by Fribourg, the castle was the seat of Fribourg bailiffs, the predecessors of modern prefects. In the 18th century, to the left of the main entrance, on the side of the Church of Notre-Dame de Compassion, stands the pillory, a small building where people who committed minor transgressions were put in chains and left exposed, sentenced to public humiliation; on the right is the turnstile, a revolving cage used for the same purpose.
Between 1763 and 1768, major work was undertaken inside the castle, in the bailiff’s apartment and in the reception hall. In the 18th century a series of buildings housing shops bordered the moat on the side of the lime tree. These shops were replaced in the second half of the 19th century by ever taller buildings that gradually concealed the castle from sight. These buildings were gradually demolished by the commune of Bulle from 1968.
In 1854, prisons were installed in the south wing of the castle. In 1946, new cells were added in the north-east corner. The castle was listed as an historic monument of national importance following a restoration campaign supervised by the Confederation between 1921 and 1930. To coincide with construction of the new Musée gruérien, a public footpath crossing the moat was opened in 1976.
Today, the castle is occupied by the prefecture of La Gruyère, the court and the gendarmerie. It is owned by the State of Fribourg. The inner courtyard is open to the public.
© Musée gruérien and Cultural Heritage Department of the canton of Fribourg
Also of interest
The sections “A living town” and “Contours in motion” of the permanent exhibition La Gruyère, footprints and detours, at the Musée gruérien.
Find out more
Daniel de Raemy, Châteaux, donjons et grandes tours dans les Etats de Savoie (1230-1330), Cahiers d’archéologie romande 1998, Volume 1.
Marc-Henri Jordan, Le château de Bulle, Pro Fribourg, n°93, 1991
Bulle, the castle viewed from the east, circa 1910
© Charles Morel Musée gruérien
Freer Gallery Presents First U.S. Exhibition on Childhood in Chinese Art
Paintings and Ceramics Spanning Two Millennia Reveal the Lives of
Children in Chinese Culture
Media only: Deborah Galyan 202.633.0504; Ellie Reynolds, 202.633.0521
Public only: 202.633.1000
A collection of Chinese paintings, ceramics and slate carving depicting children at play from the past two millennia will be on view Nov. 18 through May 23, 2010, at the Smithsonian’s Freer Gallery of Art. “Children at Play in Chinese Painting,” drawn from the collection of the Freer, is the first exhibition to be organized on the theme in the United States.
The exhibition includes 36 objects highlighting the effervescence of youth. Silken scrolls depict young school children teasing each other over lessons, rural boys flitting through idyllic nature scenes while herding oxen and urban toddlers jumping rope to the beat of a striking gong. The images are simple and amusing, yet revelatory of the important role children play in Chinese civic life.
“This show brings a popular Chinese theme to light,” said Joseph Chang, curator of Chinese art, who organized the exhibition. “Children are considered blessings and symbols of good luck in Chinese culture.”
Images of children were especially popular from the 10th century onward, during the Song, Yuan, Ming and Qing dynasties, reflecting the desire for offspring, especially males, which permeated all sectors of Chinese society. Professional artists sold paintings to the general public, who bought them as tokens of good luck or gave them as gifts. Emperors hoping for male heirs commissioned artworks depicting up to 16 children, a number considered auspicious due to popular stories of an ancient emperor whose 16 children helped him rule.
One such object in the exhibition, a Ming Dynasty blue and white “boys jar,” was commissioned by Emperor Jiajing and depicts 16 lively youngsters cavorting through an abstracted garden setting.
Chang organized the exhibition to reflect “the striking contrast between the lives of rural and urban children.” Rural children are depicted in rugged, natural landscapes. They engage in solitary activities, such as herding livestock or fishing, that contribute to the prosperity of the family farm. Urban children, on the other hand, are found in contained environments, such as brightly colored gardens, where their activities are closely monitored by mothers and female attendants.
Although female children are rarely represented, mothers are commonly portrayed, benignly watching their young boys growing into men. Whether rural or urban, male or female, the message is the same for all Chinese children: You are the future, but take time now for play.
The Freer Gallery of Art, located at 12th Street and Independence Avenue S.W., and the adjacent Arthur M. Sackler Gallery, located at 1050 Independence Ave. S.W., are on the National Mall in Washington, D.C. Hours are 10 a.m. to 5:30 p.m. every day, except Dec. 25, and admission is free. The galleries are located near the Smithsonian Metrorail station on the Blue and Orange lines. For more information about the Freer and Sackler galleries and their exhibitions, programs, tours and other events, the public is welcome to visit www.asia.si.edu. For general Smithsonian information, the public may call (202) 633-1000 or TTY (202) 633-5285.
Above: Children Playing in a Garden;
China, Ming dynasty, 15th-16th century;
Fan mounted as an album leaf; ink and color on silk;
Gift of Charles Lang Freer;
Ballooning: A History, 1782–1900
Years after Brazilian airship pioneer Alberto Santos-Dumont took his first flight in a balloon in 1897 he voiced the opinion that the view of the earth from above had the power to so humble people that, mindful of their own small place on the planet, they would want to lead more just and moral lives, culminating in world peace. Ballooning by then was already a century old—had he forgotten that it had only been 10 years after the first repeatable ascents in 1783 that Napoleon used balloons in his protracted war against much of Europe?
To impress upon the 21st-century reader the impact early “aerostatic machines” had on the popular and the scientific imagination in their day, the authors liken it to the moon landings of the modern era. One is not surprised to learn that the people of Santos-Dumont’s generation thought of the balloon era as not only a distraction but actually an impediment to the advancement of powered flight. What is more surprising is that newspaper clippings from the 1820s reproduced here show that already then the promise ballooning held was wearing thin (“Slender indeed have been the additions made to science.”)
It is, in fact, these contemporary accounts that are the primary appeal of this book. It must be said that the literature on this subject is rather vast, ranging from the staunchly academic to utter fluff. This book is a very approachable, general-interest-level text with a focus on the human angle. The key to it may well lie in the last sentence of the Preface: “We hope this book kindles the imagination and instills the sense of awe and adventure we felt in writing it.”
Kotar and Gessler seem to have stumbled upon this subject by accident, in the form of a news story from the 1800s that sparked their curiosity which they satisfied by looking for more and more of the same. They are, in other words, not experts in this field but discoverers in much the same way many readers of this book will be.
The authors paint a richly colorful picture of the aeronauts’ hopes and fears, successes and failures and the place they occupied in society. Readers familiar with the history of ballooning would not be wrong in saying that the painting is done with too broad a brush—but the authors did not intend to write that kind of a book. Their approach is to look at ballooning though eyewitness and contemporary accounts, within the context of popular culture rather than science.
Beginning with Chinese kites in 400 B.C. the first chapter is a crash course in man’s various attempts to get things airborne and keep them aloft and devise means of achieving controlled motion. The remainder of the book traces in more or less chronological order (balloons may not have moved fast but the theory did so there is much overlap here in terms of concurrent developments) the spread of aerostation from France (it is not explained why it started here) throughout Europe and then America. The many excerpts and quotes are all referenced in Chapter Notes at the back of the book—and manifest one example of the aforementioned “broad brush” approach: the number of American sources is greater than, for instance, the British ones, which is not at all proportionate to the depth or quality of each country’s contributions.
In terms of writing craft, the storytelling-style is different enough to call for an explanation. Kotar and Gessler are a writing team that is rooted in various genres of fiction, from screenplays to Westerns, science fiction, horror and psychological novels. Add a dose of Civil War and baseball, not to forget steamships and running several celebrity websites, and you see why wordsmithing sometimes wins out over precise but potentially boring prose.
We always extol the virtues of reading every part of a book. In this case we also need to say that if you read the Preface, don't lose confidence! Unlike book jacket or back cover blurbs that are not usually written by the author/s, the Preface is their opportunity to shine and make a—good—first impression and win over the reader. This Preface is an example of how not to do it. Patchwork text that clearly has gone through one too many copy/paste cycles (whole sentences duplicated almost verbatim, pp. 2 and 3), a significant typo (“causalities” when surely “casualties” was the intended word, p. 4), catchy phrases you’ll come across again elsewhere in the book etc.—looks like a rush job to us.
The illustrations are sparse and the reproduction not exactly breathtaking even accounting for their age and poor source material. All are credited. The Bibliography appears to be a list of sources consulted (very many Internet sources, including—gasp—Wikipedia) rather than recommendations for further reading (none of the standard balloon monographs are listed).
Copyright 2012, Sabu Advani (speedreaders.info).
The book world was saddened last week by the death of Michael S. Hart, founder of Project Gutenberg, at the age of 64. Project Gutenberg represented the first significant attempt to digitize literature, having been launched in 1971 when Hart typed the text of the Declaration of Independence into a computer. Accordingly, Hart is credited with the invention of the e-book. His project's website remains to this day a popular, convenient source for free electronic editions of public-domain literary works.
I’ve always been a fan of Project Gutenberg, which may have the distinction of being the only major Web resource to retain a distinctly ‘90s-era look and feel. Though its features have changed with the times (you can now download e-texts for Kindle or share them via Facebook and Twitter), its design remains proudly square—right down to the Times New Roman and Courier fonts. (Courier dominated even more in older versions; the site used to look like it had been created by typewriter.) Maybe it’s no surprise, then, that Project Gutenberg has never abandoned its ‘90s spirit either. Together with Wikipedia, it seems to me the site that best embodies the mission of the early Web: to serve as a kind of superlibrary, distributing knowledge widely and freely for its own sake. That idealism was bound to have its limits, but Hart helped ensure that it wouldn't die out completely. He’ll be much missed.
IRIS Movie of the Day
At least once a week a movie of the Sun taken by NASA's Interface Region Imaging Spectrograph (IRIS) is posted by one of the scientists operating the instrument.
Flare Ribbons and Loops
Credit: IRIS, LMSAL/NASA, Paul Bryans
Solar flares are the most impulsive releases of energy on the Sun. They can result in the ejection of high-energy particles that can have a profound effect on Earth. However, we still do not fully understand what triggers these events. Observations from IRIS can help to solve this problem. IRIS caught this M-class flare in June 2015. The flare causes a sudden brightening between two sunspots, in a typical "ribbon" pattern. Notice also the formation of post-flare loops between the sunspots with plasma flowing down towards the footpoints.
In comments that subsequently dominated media both social and mainstream, Musk also spoke about his hopes for accessible space travel "to extend the life of humanity," pointing to the impending threat of climate change.
Talk about déjà vu.
Gerard K. O'Neill, an American physicist who spent three decades on the faculty of Princeton University, proposed much the same and for similar reasons 50 years ago. In September 1974, Physics Today magazine published O'Neill's paper, "The Colonization of Space." The piece described space colonies as the solution to Earth's greatest challenges, such as "protecting the biosphere from damage caused by transportation and industrial pollution" and "preventing overload of Earth's heat balance."
In a fascinating new book, UC Santa Barbara history professor W. Patrick McCray offers an examination of O'Neill and other radical innovators who never quite got their due.
"The Visioneers: How a Group of Elite Scientists Pursued Space Colonies, Nanotechnologies, and a Limitless Future" (Princeton University Press, 2013) is a history of and, in turn, an homage to these "modern utopians" who believed their technologies could transform society.
Equal parts visionaries and engineers, McCray's visioneers were futurists, ace self-promoters, and indefatigable optimists. Their schemes were not pie-in-the-sky; these Ivy-trained experts had hard science on their side. Yet their grand plans were never fully realized, impeded by skeptical colleagues, staid politicians, and, perhaps, their own zeal.
"Visioneers want to find the one thing that's going to fix the problem, and they're often trying for the grand slam home run rather than trying to hit singles," McCray said. "They want to hit the ball out of the park, which is maybe not the best approach to dealing with the problems that society faces. But the futures they envisioned are not failed futures; they had an influence and an impact on where we are today."
The book focuses primarily on two key figures, physicist O'Neill and his onetime protégé Eric Drexler, whose paths briefly crossed, and whose fates would take the same course a decade apart.
O'Neill's devotion to the idea of colonizing space gained intense popularity in the media, landed the Princeton professor funding from NASA, and made his book a best-seller. But a skeptical U.S. government refused to fund what came to be seen as a fantasy.
Drexler followed a similar trajectory. Part of the early- and mid-1970s "pro-space movement," he was among those who worked with O'Neill on a prototype of an electromagnetic catapult meant to deliver raw materials into space. Then his interest shifted to what he called molecular engineering, known today as nanotechnology.
"His vision was a radical one, not like what we're doing here today," McCray said of Drexler. "He envisioned nanobots: self-replicating nanoscale devices able to build anything from the ground up, and computer-controlled machines operating at the molecular scale."
As O'Neill had before him, Drexler attracted a lot of attention, wrote articles and a best-selling book, and developed a public following. But when the mainstream science and engineering community started talking about a national nanotech initiative, "it wasn't the version of nanotech that Drexler was talking about, and for years, he was marginalized," said McCray. "Many of his supporters likened him to technology aficionados working in their parents' basements.
"We can look back at these ideas people had of the future that seem pretty far out there, but as a historian part of my job is taking these ideas and contextualizing them," McCray continued. "Even though we don't have these worlds that O'Neill and Drexler imagined, we have close cousins to it. We have Elon Musk and SpaceX. He's a progeny of O'Neill in some ways. We have private space development, and scientists who are developing nanoscale machines of modest capability."
The book ends with a discussion of technological ecosystems, whose inhabitants include big universities, patent lawyers, and big corporations. And of course, "among the interstitial bits and pieces," as McCray put it, there are the visioneers.
"Visioneers are important for the health of that ecosystem," McCray said. "They help set the boundaries of what might be possible, and popularize those boundaries, thereby getting mainstream scientists and engineers pushing back, saying ‘No, we can't do that,' or walking to the fenceline to see what they can do." McCray's research for "The Visioneers" was funded by UCSB's Center for Nanotechnology in Society.
(NaturalNews) Scientists are finally making inroads into understanding the effect that aerosol particulate matter is having on the way storm clouds form. Recent research has revealed that the tiny pollutants can either inhibit thunderstorms or make them stronger depending on wind shear conditions.
Wind shear occurs when wind begins to change velocity and direction along a wind stream. It is involved in forming storms, tornados, and other weather phenomena. Planes and jets often experience turbulence when there are changes in wind shear.
When wind shear conditions are strong, aerosol pollution impedes the formation of thunderhead clouds. When wind shear is weak, the pollutants actually increase thunderhead development and cause storms to be stronger.
The interaction between aerosol pollution and the formation of clouds has long been a mystery to scientists and climatologists. Current research suggests that the microscopic, man-made particles may be severely altering the hydrological cycle. They may be limiting rainfall in some areas while increasing it in others.
Jiwen Fan and her team from the Department of Energy's Pacific Northwest National Laboratory found that wind shear plays the largest role in determining how aerosol pollution will affect cloud formation. Though it was believed in the past that humidity and other factors came into play, she and her team conducted computer models that verified the dominance of wind shear in determining how and when clouds form.
Their research strongly suggests that aerosol pollution may be directly altering local weather patterns, including the amount and rate of precipitation that occurs and the intensity of storms.
Comments by Mike Adams, the Health Ranger
What this research really reveals is that human activity alters the atmosphere (and the weather) regardless of whether CO2 is causing global warming. Particulate matter alone alters the weather and can multiply the severity of storms.
Most people have noticed that the weather around the world is becoming increasingly radical. Storms are stronger, droughts are longer and weather "extremes" are becoming far more common.
Why does this matter? Because people need food to survive, and radical weather plays havoc with the food supply. The more unpredictable the weather becomes, in fact, the more crop failures we'll see around the world.
This brings up the all-important issue of food security. Where will YOUR food come from if the national food supply you depend on suffers serious disruptions due to freak weather?
In addition to his lab work, Adams is also the (non-paid) executive director of the non-profit Consumer Wellness Center (CWC), an organization that redirects 100% of its donations receipts to grant programs that teach children and women how to grow their own food or vastly improve their nutrition. Click here to see some of the CWC success stories.
With a background in science and software technology, Adams is the original founder of the email newsletter technology company known as Arial Software. Using his technical experience combined with his love for natural health, Adams developed and deployed the content management system currently driving NaturalNews.com. He also engineered the high-level statistical algorithms that power SCIENCE.naturalnews.com, a massive research resource now featuring over 10 million scientific studies.
Roadway Speed Limits
HOW SPEED LIMITS ARE ESTABLISHED
The City of Colorado Springs uses the 85th percentile speed of free flowing traffic as its basic factor when establishing speed limits.
Radar and other methods are used to collect speed data from random vehicles on roadways. This speed is subject to revision based upon factors such as: roadway geometrics, parking, crash experience, pedestrians, curves, surrounding development, and engineering judgment. This practice is in accordance with the Manual on Uniform Traffic Control Devices which has been adopted by the State of Colorado.
In the final analysis, it is the judgment of the city traffic engineer that determines which, if any, of the factors in the speed study warrant an adjustment of the 85th percentile speeds. Once all variables are considered and a speed limit is established, traffic should flow at a safe and efficient level.
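The 85th-percentile starting point described above can be sketched as a small calculation. This is an illustrative helper (the function name and sample data are made up, not the city's actual tooling), using a simple nearest-rank percentile:

```python
import math

def percentile_speed(speeds, pct=85):
    # Nearest-rank percentile of observed free-flow speeds (mph):
    # sort the sample, then take the value at rank ceil(pct% * n).
    s = sorted(speeds)
    rank = math.ceil(pct / 100 * len(s))
    return s[rank - 1]

# Hypothetical radar sample from one roadway segment
sample = [30, 32, 33, 35, 35, 36, 37, 38, 40, 45]
print(percentile_speed(sample))  # 85% of drivers travel at or below this speed
```

An engineer would then adjust this raw figure for geometrics, crash history, and the other factors listed before posting a limit.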
UNPOSTED SPEED LIMITS
Per city code 10.5.102: Where speed limits are not posted, and where no special hazard exists, the following speed shall be lawful: twenty five (25) miles per hour on streets and highways and fifteen (15) miles per hour in alleys.
For more information, contact Traffic Operations at 719-385-5908 or [email protected].
Why does it rain so much in Livingstone at this time of the year? Why is the sun shining today? What makes a rainbow? These are some of the questions that the children of the Natebe Primary School were asked to consider at last Saturday’s Kids Club.
Some gave good answers to these questions, while others had no idea at all. Lion Encounter volunteers; Catherine from Australia and Charlotte from Norway conducted a lesson on weather and, by the end of it, all of the students were able to shout out the answers to these questions and many more.
The children were also taught that being able to predict what the weather would be like can help them plan what they want to do. They were given the example of someone planning a party in their garden. If the forecast was for rain, they could either change the day or the venue for the party. Without that understanding of weather conditions, their party would literally be a washout.
FauxCrypt is an algorithm for modification of a plaintext document that leaves it generally readable by a person but not readily searched or indexed by machine. The algorithm employs a dictionary substitution of selected words, and an obfuscating transposition of letters in other words. The obfuscation is designed to leave the words understandable, although they are badly spelled. FauxCrypt is free, open source software, with source code available.
fauxcrypt is an alhroitgm for modifictaion of a planitext documnet taht laeves it gneerally raedable by a person but not raedily saercehd or idnexed by macihne. the alhroitgm empyols a dicitnoary subtsituiton of selected wrods, and an obfusctanig trnasposition of lteters in ohter wrods. the obfusctaion is dseigned to laeve the wrods udnertsnadable, aghtuolh tehy are badly slelpde. fauxcrypt is fere, open suorce sfotwaer, with suorce code available.
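The letter-transposition half of the algorithm (keep each word's first and last letters, shuffle the interior so the word stays readable but defeats exact-match search) can be sketched as follows. This is a minimal illustration of the idea, not the actual FauxCrypt source, and the function names are made up:

```python
import random

def scramble_word(word, rng):
    # Keep the first and last letters; shuffle the interior.
    # Words of three letters or fewer have no interior to shuffle.
    if len(word) <= 3:
        return word
    inner = list(word[1:-1])
    rng.shuffle(inner)
    return word[0] + "".join(inner) + word[-1]

def obfuscate(text, seed=42):
    # Apply the transposition word by word (whitespace-delimited;
    # the real tool also does dictionary substitution, omitted here).
    rng = random.Random(seed)
    return " ".join(scramble_word(w, rng) for w in text.split())

print(obfuscate("fauxcrypt leaves plaintext generally readable"))
```

Because only interior letters move, each output word contains exactly the same letters as its source word, so a human can still decode it at reading speed.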
Programmers - Have you implemented FauxCrypt in another programming language? Have you designed a GUI for FauxCrypt on a particular platform? Contact the project leader to contribute your code.
Pilgrim Nuclear Power Station (Entergy Nuclear)
A group of Massachusetts citizens is planning to use the Clean Water Act to sue the Entergy Corporation for environmental violations at their Pilgrim Nuclear Power Station. The group says the company should pay nearly a billion dollars in penalties. The citizens' lawyer, Meg Sheehan, laid out the details of the case to host Steve Curwood, and Vermont Law School Professor Patrick Parenteau explained the wider context.
CURWOOD: From the Jennifer and Ted Stanley Studios in Boston, this is Living on Earth. I'm Steve Curwood. A group of citizens has filed an intent to sue the Entergy Corporation, alleging that the Pilgrim Atomic Power Station in Plymouth Massachusetts has violated the Clean Water Act.
They say the plant has polluted Cape Cod Bay, massively killed local fish populations, and the operators kept inadequate environmental records. Entergy could be liable for over 800 million dollars in penalties. Citizens have the right to sue when the government fails to enforce the Clean Water Act so if the EPA does not step in within 60 days, this plaintiff group can proceed. Their lead attorney is Meg Sheehan of EcoLaw.
SHEEHAN: Well, at this point we're alleging they have violated the federal Clean Water Act on over 33,000 separate occasions by discharging pollutants at levels that exceed those permitted in their Clean Water Act permit, and also that they’re failing to adequately monitor and report some of the discharges that they’re dumping out into the bay.
Since about 2000 when Entergy took over Pilgrim from Boston Edison, Entergy has failed to obtain EPA’s approval for their marine monitoring plan. To us that’s a really egregious violation because when Pilgrim was built in the ’70s, there was great concern in the scientific community among the fishermen and the fisheries experts that there would be this terrible impact and these very strict provisions were put in for regulatory oversight. But when Entergy came in, they pretty much told the regulators that they weren’t willing to participate with this oversight advisory committee, so that’s the crux of our allegations.
CURWOOD: How is it that Entergy is killing thousands of fish, you say?
SHEEHAN: First of all, they’ve taken 510 million gallons of cooling water a day from Cape Cod Bay. Any kind of marine life, whether it’s a fish or plankton – anything that can’t avoid this velocity of the pumps gets sucked into the plant – and some of the bigger fish get slammed against these screens and trash racks that they have. The fish are killed either than being impinged and killed on the screens, or they’re sucked in and basically fried. Back in the 1990s, the state Marine Fisheries Department stated that Pilgrim’s operation had killed off up to 40 percent of the winter flounder population.
CURWOOD: Why did you decide to use the citizen law provision in the Clean Water Act to launch this case?
SHEEHAN: Well, really we had no other option. We’d been talking with federal regulators and state regulators since early this year, urging them to look closely at this, to look closely at these monitoring reports and look at the violations and to review Entergy’s permit which expired 16 years ago to make sure that they were doing the proper monitoring, and no action was taken. This plant is going to be operating for another 20 years from what we understand, so we just felt this was our only option.
CURWOOD: What’s your ultimate goal here - do you want them to clean up their act or do you just want them to shut this nuclear power plant down?
SHEEHAN: I think it’s unrealistic to think that we’d be able to shut it down. We’ve accepted the reality that it has been relicensed by the NRC, and our view is that if it’s going to continue to operate with this kind of environmental destruction, is unacceptable, these kinds of violations are unacceptable, and the EPA really does have to take a solid look at this and update the permit.
CURWOOD: Meg Sheehan is an attorney with Eco Law, thank you so much.
SHEEHAN: Thank you very much.
CURWOOD: Living on Earth contacted the Entergy Corporation seeking comment on the potential lawsuit. Pilgrim Station Spokesperson Carol Wightman sent this statement: “Entergy takes its environmental responsibilities and any allegation of noncompliance seriously. We will respond to the notice of intent after we have had a chance to thoroughly review the specific allegations. We note that EcoLaw unsuccessfully raised a number of these allegations in the NRC license renewal proceeding for Pilgrim Station.”
Well, to assess the wider implications of this case we turn now to Patrick Parenteau, Professor of Law at Vermont Law School. Welcome to Living on Earth!
PARENTEAU: Thank you Steve, good to be here!
CURWOOD: Now, how have citizens used the Clean Water Act to sue nuclear power companies in the past?
PARENTEAU: It’s very rare. Nuclear power plants, of course, are regulated primarily by the Nuclear Regulatory Commission, federal law actually preempts state law, and even preempts the Clean Water Act, when it comes to radiological health and safety issues, so you don’t see very many citizen suits against nuclear power plants under the Clean Water Act. So this case does represent sort of taking it to a whole ’nother level in terms of using the Clean Water Act against a nuclear power plant.
CURWOOD: This group is looking for almost a billion dollars in damages – how realistic is that?
PARENTEAU: Well, I don’t think that’s realistic. I understand the citizens have alleged that because the statutory maximum penalty is over 32 thousand dollars per day per violation, and they’re alleging tens of thousands of violations going back many years. It’s unheard of that a court would actually award such a massive amount of penalties.
It is possible, in these cases, to have penalties over a million dollars. In fact, a group called Earth Island sued the San Onofre nuclear power plant in California back in the ’90s and achieved a 17 million dollar settlement in that case. So you can think about a large, potential damage award, but nothing in the billion-dollar range.
CURWOOD: Who would get the money?
PARENTEAU: Well, it either goes to the United States Treasury, if the court assesses the penalty, or, if there’s a settlement agreement, the plaintiffs, the citizen group, and the plant owner, Entergy, could create what are called environmental credit projects. In fact, there was one done for Boston Harbor many years ago. And these are projects that improve water quality, sometimes they restore wetlands, sometimes they create public education programs, there’s a variety of things that could be agreed to in a settlement agreement – in lieu of a penalty going to the Treasury, the money would go to an environmentally beneficial project.
CURWOOD: Now, I understand that the plaintiffs are basing some of their allegations on the company’s own reports and filings themselves.
PARENTEAU: That’s correct. They’re called discharge monitoring reports, they’re required by law, they’re required to be made public. The courts have said these documents are in the nature of an admission of liability, so that gives the plaintiffs in these cases the upper hand. Now, the real question is going to come down to: do the discharge monitoring reports actually reveal the violations that the plaintiffs have alleged, and that will turn on how you interpret the terms of the permit.
These discharge permits are huge documents with many provisions and conditions and terms. So I anticipate that there will be a lot of argument about what the permit actually requires, and whether the discharge monitoring reports are actually showing a violation or not.
CURWOOD: Now, I know that you don’t have a crystal ball there at the Vermont Law School, but what kind of chance do you think they have for this case?
PARENTEAU: I think they have a chance of either getting a judgment in their favor on some of these allegations, or maybe even more realistically a settlement. The California case, again, comes to mind. In that case, a senior judge in that case was appointed as the quote “mediation judge.”
And these mediation judges have tremendous power to kind of force the parties into a settlement agreement basically saying neither one of you can be sure who is going to win, and you both have a lot to lose, so I am going to meet with you and require that you’d make a serious effort to try to settle your differences. I could see something like that coming out of this case.
CURWOOD: What do you think might be the financial impact of this case on Entergy at the end of the day?
PARENTEAU: Yeah, that’s really hard to judge. It’s not going to be anywhere near the maximum that the citizens are seeking, but I think it could be significant enough that Entergy may reevaluate the economics of the continued operations of a plant like this. These plants, these older plants, are reaching the end of their useful lives, of their economic lives, so a major judgment against them for a water quality violation could tip in the favor of shutting it down earlier rather than later.
CURWOOD: To what extent to you anticipate this tactic being replicated with other nuclear power plants now?
PARENTEAU: Well, there’s a lot of these nuclear plants, the Indian Point plant in New York, the Oyster Creek plant in New Jersey and many others - that are in the relicensing process. They’re older plants, mostly over 40 years old, they probably were not certainly state-of-the-art when they were built, they’ve demonstrated that they had significant impacts on water quality and other environmental conditions, so I guess I would expect more citizen groups located in the vicinity of these plants to be looking for every possibility of either shutting them down, which is happening in some places, or requiring them to install much better technology to protect the environment. So, I think this case may signal - I don’t know if you call it a wave - but this isn’t the last of these kinds of cases that we’re going to see.
CURWOOD: Pat Parenteau is Professor of Law at the Vermont Law School, thanks so much for joining us!
PARENTEAU: You're welcome, Steve!
Major funding for Living on Earth is provided by the National Science Foundation.
Kendeda Fund, furthering the values that contribute to a healthy planet.
The Grantham Foundation for the Protection of the Environment: Committed to protecting and improving the health of the global environment.
Protein is beneficial because it is made of amino acids.
Amino acids are the body’s building blocks and are instrumental in forming cells, repairing tissue, making antibodies, building nucleoproteins (RNA/DNA), carrying oxygen throughout the body, assisting muscle activity, as well as being part of the enzyme and hormonal system.
The human body requires approximately 20 amino acids in order to synthesize proteins. About half of the amino acids are made by the body and so don’t need to be in the diet – these are known as the ‘nonessential’ amino acids (not essential in the diet).
The remaining amino acids (actually 9 for adults, 10 for the young) are not made in the body, so are obtained only from food – these are the ‘essential’ amino acids (essential in the diet).
Protein in the human body
Take away the water and about 75 percent of your weight is protein. This chemical family is found throughout the body. It’s in muscle, bone, skin, hair, and virtually every other body part or tissue. It makes up the enzymes that power many chemical reactions and the hemoglobin that carries oxygen in your blood.
At least 10,000 different proteins make you what you are and keep you that way. Twenty or so basic building blocks, called amino acids, provide the raw material for all proteins. Following genetic instructions, the body strings together amino acids.
Some genes call for short chains, others are blueprints for long chains that fold, origami-like, into intricate, three-dimensional structures. Because the body doesn’t store amino acids, as it does fats or carbohydrates, it needs a daily supply of amino acids to make new protein.
Source: Harvard School of Public Health
Appropriate Dietary Protein Intake
Protein is not stored in the body as such, unlike fat (in fat cells) and glucose (in muscle or liver).
Because muscles, for example, are built from protein, we need to consume, and synthesize, enough protein to maintain healthy, hard-working muscles.
What about excessive protein consumption?
Be wary of high-protein diets, which may also be high in fat and may lead to high cholesterol, heart disease or other diseases, such as gout.
A high-protein diet may put additional strain on the kidneys when extra waste matter (the end product of protein metabolism) is excreted in the urine.
What about insufficient protein consumption?
Lack of protein on the other hand, can cause growth failure, loss of muscle mass, decreased immunity, weakening of the heart and respiratory system, and death. Protein malnutrition leads to the condition known as kwashiorkor, suffered especially by people in regions of the world where protein is not available.
Adequate protein consumption
A nutritionally balanced diet provides adequate protein, and a vegetarian diet can provide the proper combination of plant proteins to achieve this.
Two to three servings of protein-rich food supply the daily needs of most adults, depending upon age, medical conditions, and the type of diet employed. Select lean meat, poultry without skin, fish, dry beans, lentils, and legumes often, as these are the protein choices lowest in fat content.
Food Sources of Protein
If the protein in a food supplies enough of the essential amino acids, it is called a complete protein; if not, it is an incomplete protein.
The protein content of cooked meat and dairy products is from 15% to 40%, and that of cooked cereals, beans, lentils, and peas only from 3% to 10%.
Animal dietary protein sources
All meat and other animal products are sources of complete protein. For example:
- beef, lamb, pork
- poultry, eggs
- fish, shellfish
- milk and milk products
Vegetable dietary protein sources
Plant foods contain the same amino acids as animal foods, but in differing amounts. Protein in foods such as most grains, fruits, and vegetables are considered incomplete proteins, being either low in, or lacking, one of the essential amino acids.
Plant protein sources can be combined with other plant or animal products to form a complete protein; e.g. rice and beans, milk and wheat cereal, corn and beans. Plant foods considered complete proteins:
- Soy foods (tofu, tempeh, miso, and soy milk)
- Sprouted seeds (each type of sprout differs in nutrient proportions, so eat a variety)
- Grains (especially amaranth and quinoa, highest in protein)
- Beans and legumes (especially when eaten raw)
- Spirulina and chlorella or blue-green algae (over 60% protein)
Recommended Dietary Allowance (RDA) for Protein
A minimum daily intake of protein is about 0.36 grams per lb, or 0.8 grams per kilogram, of body weight, while excess protein is defined as anything more than twice that amount. For an average-build 155 lb/70 kg man in good health, the RDA amounts to 56 g as a minimum, but less than 112 g per day.
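The figures above come from simple arithmetic; worked out for the 70 kg example:

```latex
\text{minimum} = 0.8\,\tfrac{\mathrm{g}}{\mathrm{kg}} \times 70\,\mathrm{kg} = 56\,\mathrm{g/day},
\qquad
\text{upper bound} = 2 \times 56\,\mathrm{g} = 112\,\mathrm{g/day}
```

The same calculation applies at any body weight: multiply weight in kilograms by 0.8 for the minimum, and double that for the suggested upper bound.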
A percentage of the population, however – growing children, pregnant and lactating women, the elderly, anyone undergoing severe stress (trauma, hospitalization, surgery), disease or disability – need more protein than the RDA. Also anyone doing endurance training (not resistance training which builds muscle and uses protein more efficiently) requires higher dietary protein – from ¼ to ½ as much again per day.
Protein and Food Servings
Common serving sizes for a healthy adult consuming 2 to 3 servings per day to provide adequate protein:
- 2-3 ounces/56-85g of cooked lean meat, poultry, and fish
- ½ cup of cooked dry beans, lentils, or legumes
- 1 egg or 2 tablespoons of peanut butter (equivalent to 1 ounce/28g of lean meat).
For example, cereal with milk for breakfast, a peanut butter and jelly sandwich for lunch, and a piece of fish with a side of beans for dinner, totals about 70 grams of protein for the day. 1 gram of protein is equal to 4 calories, so a food serving containing 20 g or 0.7oz of protein equates to 80 calories.
3 Times a Day
As a rule of thumb then, three times each day you should consume about the amount of protein source in a food serving which could be held in the palm of your hand — about the size of a chicken breast.
This constitutes about 30% of your calories. Have whatever fat comes associated with that protein, and add the equivalent of three handfuls of high-fiber vegetables.
Dutch food researcher NIZO has developed a new technology that records and analyzes the sound of the tongue rubbing against food, which can be used to predict the sensory effects of food innovations. This technology would enable food scientists to determine the creaminess or astringency of new foods.
When formulating for low-fat or low-carb products, developers have to deal with a significant change in mouthfeel and need to compensate for these changes. Currently, standard rheology measurements are used to determine viscosity. More relevant is the way a product changes the friction of surfaces, and for this reason tribology is often attempted. However, the plastic or stainless steel surfaces used in tribometers cannot sufficiently mimic the soft, mucous-coated papillary surface of a live tongue.
A new technology, called “acoustic tribology,” records and analyzes the sound generated by rubbing or tapping of the tongue in the mouth during mastication. The inventor, George van Aken, explains that the sound produced by rubbing or tapping is caused by the same vibrations of the tissue that are sensed by the mechanoreceptors in the tongue that signal the sensation of roughness, stickiness, and structural coarseness of any food (fluid, semi-solid, and solid). Acoustic tribology is non-invasive, measures in real time, and can be applied directly on human subjects without any preconditioning or preparation of the body surfaces.
“The advantage of acoustic tribology is that we measure where the consumer experiences the food: in the mouth,” said van Aken. “It gives objective information about the suppleness of movements and thus the lubricating behavior of the food on the tongue.”
Imagine recharging your cell phone without plugging it in. Or powering your iPod while you walk around the house with it. Researchers at the Massachusetts Institute of Technology (MIT) have taken the first steps towards such wireless energy transfer by conceptualizing a way to transmit electricity over room-size distances. One day, they say, the technology could power whole households or even motor vehicles wirelessly.
The MIT team calls the concept a nonradiative electromagnetic field. It involves two simple ring-shaped devices made of copper. One, connected to a conventional power source, would generate magnetic fields similar to those that power electric motors. These fields would stretch outward a few meters and would only affect the receiving--or companion device--which would be outfitted with a second copper ring tuned to a specific frequency. Team leader Marin Soljačić says he began working on the concept because he wanted to find a better alternative to having to recharge his laptop computer and cell phone so frequently. He presented the team's findings today at an American Institute of Physics forum in San Francisco, California.
The technology shouldn't harm other electronics or humans, says team member John Joannopoulos. Computer simulations, he says, have shown that the essential mechanism--magnetic resonance--means little or no power is transferred to extraneous objects, such as computer hard drives or even magnetic stripes on credit cards. Joannopoulos says that "for certain designs, the effect on a person is weaker than the Earth's own magnetic field."
Although the concept has yet to be verified--the team is preparing prototypes that will be tested sometime next year--Joannopoulos says he is confident the tests will be successful. "Our computational experiments have been as close to reality as they can be," he says. In principle, he adds, "you could power everything in the room," and looking much farther into the future, he predicts that someday this technology could power motor vehicles down highways, using transmitters buried in the pavement.
It's an "extremely original" and "extremely exciting" idea, says physicist Mordechai Segev of the Technion-Israel Institute of Technology in Haifa. If the concept is found to be reasonably efficient, he says, it "could revolutionize technology by making many battery-powered products lighter and smaller." | <urn:uuid:7ed216db-3ccd-4a22-b24e-5f9a3a11858d> | {
"date": "2014-07-23T04:22:15",
"dump": "CC-MAIN-2014-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997873839.53/warc/CC-MAIN-20140722025753-00184-ip-10-33-131-23.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9597718119621277,
"score": 3.46875,
"token_count": 468,
"url": "http://news.sciencemag.org/print/physics/2006/11/outlets-are-out"
} |
What is a Mason Bee?
Osmia lignaria, commonly known as the mason bee, is a megachilid bee that makes nests in reeds and natural holes, creating individual cells for its brood that are separated by mud dividers. This bee pollinates early spring fruit bloom in Canada and the United States.
Why are Mason Bees so important?
Pollination is the main reason we want Mason Bees in our yards and city. Mason Bees are hard-working pollinators and pollinate early.
Why is pollination so important?
Pollination is the sexual reproductive process, known as fertilization, that plants need to develop fruit and seeds. The pollen grains are the male part of the plant and need to be transferred from one flower to another. This is where pollinators come into play. Without proper pollination, you don’t get fruit and seeds from plants.
Why not Honey Bees?
A lot of people like honey but a honey bee hive can be added work. If you want to take on the responsibility of an inner city bee hive for honey then by all means, go for it. More honey is good! But another reason is something called Colony Collapse Disorder, a problem honey bee keepers are facing all over the world. Honey bees are just dying. This doesn’t seem to affect the Orchard Mason Bee.
Do Mason Bees sting?
Mason bees are even more docile than the honey bee. They will only sting if they are in perceived danger and will never swarm or attack. This is another reason Mason Bees are such a perfect pollinator for our yards, especially yards with small children.
How do I encourage Mason Bees in my garden?
Make or buy a bee house. They are simple to make. Mason bees enjoy burrowing into tiny holes that occur either naturally in wood or reeds. If looked after properly, you’ll find that the mason bee population just keeps growing and growing. The bees will create cocoons in the burrowing holes and hibernate there over winter, only to emerge in the spring when needed.
Fungi are world's fastest fliers
Scientists have discovered the fastest fliers in nature and, somewhat surprisingly, they're fungi!
Ohio-based researcher Nicholas Money and his colleagues at Miami University made the discovery by using ultra-fast cameras capable of taking 250,000 frames per second. Down the lens they were studying members of two fungal families - the ascomycetes and the zygomycetes - that do the essential but unsalubrious job of breaking down animal dung. These fungi rely on their spores passing harmlessly through the guts of grazing animals so that they land, quite literally, in the remains of their lunch. But animals generally avoid grazing in areas where another animal has defaecated, leaving fungi like these with a problem. Their solution is to have evolved the mycological equivalent of a "super-soaker" squirt gun - they fire their spores from tiny fluid-filled fruiting bodies so that they land in patches of uncontaminated grass ready for the next browsing ruminant. But although scientists realised that the fungal launchpad must be incredibly powerful, it was too fast and too small to surrender its secrets, at least until now.
Writing in this week's PLoS ONE the team have successfully made ballistic measurements of fungal spore trajectories, revealing that these organisms fire their microscopic projectiles, which measure just a fraction of a millimetre across, at speeds exceeding 25 metres per second and with accelerations corresponding to 180,000 times the acceleration due to gravity. This is sufficient to propel the spores up to 2.5 metres away from the parent dung pile.
The team were also able to get a handle on how the organisms achieve their fungal feat. A concentrated mixture of sugars, alcohols and other metabolites inside the fungus and its fruiting body pulls in water by osmosis, priming the gun at a pressure about four times that of the atmosphere. At the right moment the structure ruptures and the pressure drives out the spores. According to the researchers the images of these fungal ejaculations are so pretty that they've set them to music and plan to post them on YouTube!
More than one million new cases of skin cancer are diagnosed each year in the United States, making it the most commonly diagnosed type of cancer.
Overview of the Skin
The skin is the largest organ in the body. It protects against germs, covers internal organs, and helps regulate the body’s temperature. The two main layers of the skin are the epidermis and the dermis. The epidermis forms the top, outer layer of the skin. The dermis is a thicker layer beneath the epidermis.
Skin cancer generally develops in the epidermis. The three main types of cells in the epidermis are squamous cells, basal cells, and melanocytes. Squamous cells form a flat layer of cells at the top of the epidermis. Basal cells are round cells found beneath the squamous cells. Melanocytes are pigment-producing cells that are generally found in the lower part of the epidermis.
Types of Skin Cancer
Skin cancer is often categorized as melanoma or non-melanoma. Melanoma is a cancer that begins in melanocytes. It is less common than non-melanoma skin cancer, but tends to be more aggressive. In 2006 an estimated 62,000 individuals in the U.S. will be diagnosed with melanoma, and close to 8,000 will die of the disease.
The most common type of non-melanoma skin cancer is basal cell carcinoma. This type of cancer rarely spreads to distant sites in the body, but it can be disfiguring and may invade nearby tissues.
The second most common type of non-melanoma skin cancer is squamous cell carcinoma. Although this type of cancer is more likely to metastasize (spread to lymph nodes or other sites in the body) than basal cell carcinoma, metastasis is still rare. Both basal cell carcinoma and squamous cell carcinoma most commonly develop on sun-exposed parts of the skin, but can develop on other parts of the skin as well.
An alarming trend in both melanoma and non-melanoma skin cancers is that the frequency of these cancers in children and young adults appears to be increasing. This highlights the importance of prevention at all ages.
Because of their very different characteristics and treatment, melanoma and non-melanoma skin cancer are discussed further in separate sections.
Height 9.5, Width 6.4
This map packet includes five maps printed on both sides. Maps one through four feature gold, silver, and gem deposits. The fifth map, side one, shows gold occurrences taken from an 1871 map. Side two is a page outlining the history of Oregon's mining operations and how to find and mark your own gold deposits. Every map tells the greatest story! Reported and known occurrences of gold and silver, as well as the popular gem deposits, are identified in red. Many secrets of prospecting and mining are revealed in this collection! This publication is attractively packaged for display.
Map identifies locations of: gold and silver, agate, apache tears, bloodstone, carnelian, chalcedony, feldspar, fossils, garnet, geodes, jasper, limb casts, nodules, obsidian, opal, petrified wood, quartz, rhodonite, rhyolite, sagenite, serpentine, sunstones, thunder eggs, tourmaline, jade
Short Essay Questions Key
1. What is the setting and who are the characters which open the play?
The Delaney kitchen is dark and decorated with last night's dinner dishes. Doctor Delaney (Doc) is up preparing his breakfast. Marie, a young boarder staying with the Delaneys, is up early because she has to study for a biology exam rather than concentrate on her major, studying to be an artist.
2. Who is Lola and how does she make her entrance into the play?
Doc's wife, Lola, gets up complaining about not being able to sleep late. She tells Doc she had a dream about losing her young dog, Little Sheba, again.
3. What creates an argument between Doc and Lola early into the play?
Doc listens as his wife goes on to express how proud she is that he has stopped drinking and been sober for 11 months. Lola suggests Doc come with her to a movie instead of going to help a struggling alcoholic on Skid Row that night. Doc tells Lola to take Marie with her instead. When Lola tells him Marie will be busy with Turk, the two begin to argue.
At health food stores, customers are complaining about food allergies more often now with statements like, “I can’t eat this”, or “I can’t eat that.” Why do so many people, especially those who frequent health food stores, believe they have food allergies? It’s funny, I’ve never heard anyone say, “I’m allergic to coffee, cookies, cakes, popcorn, pizza or candy”, and yet these are things that people ingest regularly without a second thought. So, what’s up with all these health food store shoppers who have food allergies?
Food allergies occur when the immune system overreacts to a protein molecule in the offending food. This can happen even with those who stick to food from health food stores. The body is unable to break down that particular protein molecule, so it reacts by trying to “get rid of it”. It produces a chemical called ‘histamine’ and symptoms appear in the form of rashes, hives, itching, wheezing, breathing problems, and lots of mucus being expelled through the mouth, nose, ears, lungs, or sexual organs. More serious reactions from food allergies are: vomiting, diarrhea, loss of consciousness, drop in blood pressure, or even death.
Intolerance to certain foods is different than true food allergies, and this is a more common complaint at health food stores. With food intolerances like lactose intolerance, where a person has difficulty breaking down the sugar in the milk, the symptoms are much milder. One may have some bloating, excess gas, cramping or diarrhea. While food intolerance is unpleasant, it is not life threatening like food allergies can be in some severe cases. The usual distresses are intolerances to wheat, soy, dairy and anything at health food stores that the shopper feels is too expensive.
The question that customers at health food stores ought to ask is not, “What food am I allergic to?” But rather, “Why is my immune system reacting to what should be health food?”
When determining specific food allergies and intolerances, some factors to consider are:
1. What is the trigger food?
2. When am I eating the trigger food?
3. Am I improperly combining fruits and vegetables or grains or meat and dairy?
The list can be quite exhaustive to hunt down the offending trigger food and how it is being consumed. In fact, most food allergy experts will tell you to keep a diary of everything you eat, and when you find the trigger food to just avoid eating it. Well, that sounds simple, but if you react to many things, including those that should be health food, it’s not so simple? Of course, you can eliminate the most common triggers to food allergies like: corn, wheat, eggs, dairy, and peanuts, but if that doesn’t work, then what? The truth is, your immune system can overreact to many substances. The best way to address food allergies is to strengthen your immune system and get in balance with all the systems of the body. Shop health food stores for specific foods to feed your 5 main systems equally: immune, endocrine, digestive, circulatory and respiratory systems. This may be difficult at typical health food stores, but searching online will produce results.
When you feed all your body parts with healthy, whole food nutrients, and eliminate the fake, processed foods in your life, your 5 systems can come into a perfectly natural balance. Then you can eat what you know you should be eating. It is better to strengthen the body’s systems with properly combined health food and ward off illness and disease the natural way. When you consume the right nutrients, the body operates at optimum levels for a more relaxed, healthy life. You have access to so much good nutrition at health food stores and plenty of options for avoiding food allergies or intolerances that it’s a shame to limit your choices unnecessarily. | <urn:uuid:838ae9e3-44e3-45df-81a3-26e1708b4005> | {
"date": "2018-04-23T11:37:26",
"dump": "CC-MAIN-2018-17",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945942.19/warc/CC-MAIN-20180423110009-20180423130009-00096.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9492740631103516,
"score": 2.578125,
"token_count": 828,
"url": "http://hellested.info/?tag=diarrhea"
} |
The following links will take you to grade specific roadmaps. Each roadmap provides parents with information on the Common Core, tips on how you can support your child's learning, and context for talking to your child's teachers.
Below please find a link to the National PTA web page. On this site you will find The Parents’ Guide to Student Success, developed by the National PTA in partnership with educators and educational experts. There are guides for both ELA/Literacy and mathematics. There are separate guides for each grade level from K-12. The guides are written in both English and Spanish.
The guides include the key items that children should be learning at each grade level according to the Common Core State Standards, tips for parents on how to support their child’s learning at home, ways in which parents can build strong relationships with their child’s teachers, and how to plan for college and career.
Below you will find a link to the U.S. Department of Education's website. This link will take you directly to the Department's family resource and information page.
CT State Department of Education
Below you will find a direct link to the family information and resource page on the CT S.D.E. website.
Common Core State Standards
Select the link below to view a three minute video about the C.C.S.S.
Smarter Balanced Assessments
Follow the link below to view the Parent and Family page of the Smarter Balanced Assessment Consortium.
Connecticut Parent Information and Resource Center (PIRC)
PIRC serves families, teachers, and other professionals who work on behalf of the children in Connecticut.
Connecticut Parent Advocacy Center (CPAC)
(CPAC) is a statewide nonprofit organization that offers information and support to families of children with any disability or chronic illness, age birth to 26.
Toolkit for Title I Parent Involvement
I am checking a student's homework.
The assignment is to print the amount of English letters to the console.
For some reason, what he did works (7th line):
char first = 'A';
char last = 'Z';
int amount = 0;
amount = ("%d - %d", last - first + 1);
printf("The amount of letters in the English alphabet is %d\n", amount);
This is an example of the comma operator in use. In

("%d - %d", last - first + 1);

the left-hand (LHS) operand of the comma operator, "%d - %d", is evaluated and its result is discarded; then the right-hand (RHS) operand, last - first + 1, is evaluated and its value becomes the result of the whole expression. That result is then assigned to amount, so amount ends up holding the value of last - first + 1.
C11, chapter §6.5.17, comma operator
The left operand of a comma operator is evaluated as a void expression; there is a sequence point between its evaluation and that of the right operand. Then the right operand is evaluated; the result has its type and value.
FWIW, in this case, "%d - %d" is just another string literal; it does not carry any special meaning.
"date": "2017-04-26T02:29:08",
"dump": "CC-MAIN-2017-17",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121121.5/warc/CC-MAIN-20170423031201-00411-ip-10-145-167-34.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9186616539955139,
"score": 3.40625,
"token_count": 293,
"url": "https://codedump.io/share/w3XZPPlUuAWD/1/why-does-this-c-code-work-it-shouldn39t"
} |
With solar panel prices falling more than 80 percent in the last few years, many solar companies are turning their attention to reducing the cost of installing them. Two leading solar companies, Solon Energy, based in Berlin, and Trina Solar, based in Changzhou, China, have announced new designs for mounting solar panels to roofs—the companies say these designs can reduce the installation time by more than half, greatly reducing labor costs. The new designs reduce or eliminate the tools and hardware needed to install solar panels, and standardize solar installations, which have largely been ad hoc, reducing the time needed to design them.
While solar panels themselves used to account for most of the cost of large solar installations on commercial rooftops, the modules now account for about 40 percent of the cost. The rest comes from things like the necessary hardware, power electronics, and labor—which alone accounts for about 30 percent of the total.
Mounting solar panels on the flat rooftops of commercial installations typically involves anchoring long metal racks to the roof to create a framework that will angle the panels toward the sun and hold them together. Installers bolt the panels to this frame, wire the panels together, and electrically ground the racks.
Trina’s design gets rid of most of this metal framework. It starts with some simple changes to the solar panels themselves. Solar panels resemble framed pictures—they consist of solar cells sealed behind a piece of glass and held in place and protected by a metal frame. This frame is typically bolted to the metal rack framework that angles the panel toward the roof. Trina uses the frame of the solar panel itself to provide the framework. Special hardware locks into grooves cut into the frame, propping the panel at the correct angle without the need of any tools.
The company says this reduces installation time by two-thirds, and reduces the chance that stray bolts and screws might get caught under the framework and damage the roof. Savings in materials and labor costs can add up to a 10-cent-per-watt reduction in costs for solar power, a significant drop considering that solar panels now sell for less than $1 per watt.
While Trina modifies the solar panel’s metal frame, Solon eliminates it altogether. It takes an array of solar cells that have been sealed behind a layer of glass and then glues that to a plastic form that angles the cells toward the sun. This complete module is assembled in a factory, reducing the amount of work that needs to be done on site. Installers set the modules on the roof, link them together with plastic connectors (they also add some ballast), and plug wires together to establish electrical connections. Because the modules have no exposed metal, it isn’t necessary to ground them, which helps reduce costs. Solon says the design reduces the time needed for mechanically mounting the panels by 75 percent, and the time needed for making the electrical connections by half. (Solon says that the impact on costs varies widely, depending on factors like labor costs.)
Both designs come with some trade-offs—for example, to achieve economies of scale, the systems provide only one standard angle for pointing the panels at the sun. At some latitudes, the panels would generate more power if they were tilted more or less than that angle.
This is a list of the literary devices for which we should look out. Although it’s certainly not a comprehensive list, it should help you get your papers started.
Alliteration: the repetition of the same initial letter or sound in a series of words (ex: brilliant blue ball)
Background: the historical and social context of a book. When was it written? What was happening at that time, and how did it influence the author?
Characterization/Character Development: how the characters change through the events of the story.
Citation: an acknowledgement of an outside source that contributed to your idea, paper, or presentation. There are always two citations in a paper: one in the text of your writing, and one on the Works Cited page at the end of the paper.
Conflict: any dispute between characters, tense situation, etc., that spurs the plot of a story.
Dynamic characters: characters who change.
Foil: a secondary character whose only purpose is to develop the main characters. These characters are usually “static” or “flat”; they do not develop themselves.
Free verse: any poetry that does not adhere to the strict rules of rhyming and meter (this includes visual poetry).
Metaphor: the comparison of an image with another person, place, or thing that gives additional insight (ex: “She is the apple of my eye.”)
Meter: the rhythm of a poem, counting accents and off-beats (usually used to emphasize important words!)
Personification: when the author gives inanimate objects human characteristics (ex: “The leaves of the tree nodded their heads.”)
Plot: the series of events in a story.
Setting: where a story takes place. This can set the tone for the whole story!
Static characters: characters who do not change.
Sub-plot: a less important (sometimes hidden!) plot within the main story.
Resolution: the end of a conflict.
Thesis (or Hypothesis): the main statement of a paper or oral presentation. Essentially, it gives the basic idea of what you will say.
Hopefully this can be of some help!
Nurses impact lives every day. But once in a while, a nurse comes along who touches the lives of the world, and not just her patients. These women went above and beyond for the field of nursing. They served in wars, broke down racial barriers, and campaigned for women’s rights. They have become role models for women everywhere, not just nurses. However, nurses can be especially proud to share a title with these ten ladies.
1. Florence Nightingale
“The Lady with the Lamp” is the quintessential nurse figure. She cared for the poor and distressed, and became an advocate for improving medical conditions for everyone. In her early life, Nightingale mentored other nurses, known as Nightingale Probationers, who then went to on also work to create safer, healthier hospitals.
In 1854, Nightingale trained 38 volunteer nurses who served in the Crimean War. These nurses tended to the wounded soldiers and sent reports back regarding the status of the troops. Nightingale and her nurses reformed the hospital so that clean equipment was always available and reorganized patient care. Nightingale soon realized that many of the soldiers were dying because of unsanitary living conditions, and, after the war, she worked to improve living conditions.
While she was at war, the Florence Nightingale Fund for the Training of Nurses was established in her honor. After the war, Nightingale wrote Notes on Nursing and opened the Women’s Medical College with Dr. Elizabeth Blackwell.
International Nurses Day is celebrated on Nightingale’s birthday, May 12, each year.
2. Margaret Sanger
Best known as an activist for birth control and family planning, Margaret Sanger pioneered the women’s health movement. She distributed pamphlets with information on birth control and wrote on topics such as menstruation and sexuality. Her controversial opinions and disregard for the law often got Sanger into trouble. At one point she fled to England under an alias in order to avoid jail.
In 1921, Sanger founded the American Birth Control League which eventually became Planned Parenthood. She began the Clinical Research Bureau in 1923 – the first legal birth control clinic in the US.
3. Clara Barton
Clara Barton grew up wanting to take care of people. When her father fell ill, Clara helped to care for him until his death. This inspired her to take an interest in nursing, although she first went to school to become a teacher.
During the Civil War, Barton organized medical supplies to be brought to the battlefields. Soon enough, she was allowed to go to the battles herself in order to care for wounded soldiers. Her father taught her to be a true patriot, and these ideals shone through during Barton’s years serving during the Civil War. In 1864, Barton became the “Lady in Charge” of Union hospitals, and the following year President Lincoln charged Barton with finding missing Union soldiers.
During a trip to Europe, Barton encountered the International Committee of the Red Cross, and was motivated to create a branch back in America. In 1881, Barton founded the American Red Cross, dedicated to helping disaster victims. She served as the organization’s first president.
4. Mary Eliza Mahoney
Mary Eliza Mahoney was the first African-American woman to become a nurse in the United States. Mahoney worked at the New England Hospital for Women and Children for 15 years before she was admitted into the adjacent nursing school. Mahoney dedicated her life to nursing, heading up the Howard Orphan Asylum for African-American children in New York. She was also one of the first members of the Nurses Associated Alumnae of the United States and Canada which later became the American Nurses Association.
In 1908, Mahoney co-founded the National Association of Colored Graduate Nurses which eventually became part of the ANA. Each year, the ANA honors Ms. Mahoney with an award that represents her dedication to nursing and ending racial segregation. She has been inducted into both the ANA and National Women’s Hall of Fame.
5. Anna Caroline Maxwell
Anna Caroline Maxwell was known as the “American Florence Nightingale.” During the Spanish-American War, Maxwell headed up the army nurses, thereby establishing the Army Nurse Corps. During WWI, Maxwell was given the Medal of Honor for Public Health.
Maxwell was an essential element to the progression of practical nursing. She began working at a hospital before she was formally trained, and after graduating the Boston City Training College for Nurses, Maxwell began the nurse training program at Montreal General Hospital. She also served as the superintendent of nurses at a number of east coast hospitals including Massachusetts General Hospital and St. Luke’s Hospital. Maxwell was the first director of the New York Presbyterian Hospital which would become the Columbia School of Nursing.
6. Dorothea Lynde Dix
Dorothea Dix is best known for creating the first mental health system in the United States. Inspired by a trip to England, Dix returned to America curious how the US government treated the mentally unstable. Dix spent many years petitioning Congress, drafting legislation, and documenting her visits to various states.
Dix first succeeded with the construction of the North Carolina State Medical Society in 1849, dedicated to the care of the mentally ill. Dix also assisted with legislation that called for 12,225,000 acres of land to be used for the “insane,” with proceeds of its sale going to build mental asylums.
During the Civil War, Dix served as Superintendent of the Union Army Nurses, although she was eventually relieved of her duties after butting heads with Army doctors. She was a staunch believer in caring for everyone, though, and her nurses were some of the only caretakers of Confederate soldiers.
7. Ellen Dougherty
Ellen Dougherty, of New Zealand, was the first Registered Nurse in the world. New Zealand was the first country to initiate the Nurse Registration Act that allowed for legal registration of nurses prior to completion of training. Dougherty trained at the Wellington Hospital and was the matron at Palmerston North Hospital.
8. Mabel Keaton Staupers
Mabel Keaton Staupers was an advocate for racial equality in the field of nursing. Staupers served as the secretary of the National Association of Colored Graduate Nurses. She advocated for the introduction of African-American nurses into the Army and Navy during WWII.
In 1945, she won the fight and all nurses, regardless of race, were to be included in the military. In 1950, Staupers dissolved the NACGN as it re-aligned with the American Nurses Association.
9. Linda Richards
After receiving little training during her first attempt to become a nurse, Linda Richards enrolled as the first student in the first American Nurse’s training school. After graduating, she began work at Bellevue Hospital in New York. Recognizing the disorganization of keeping records, Richards developed a system to track individual records of each patient. The US and UK both readily adopted Richards’ system.
In 1874, Richards became the superintendent of the Boston Training School for Nurses and virtually turned the fledgling school around. Richards also traveled to England and was taught by Florence Nightingale. In her later years, Richards established the American Society of Superintendents of Training Schools and led the Philadelphia Visiting Nurses Society. In 1994, she was inducted into the National Women’s Hall of Fame.
10. Claire Bertschinger
Claire Bertschinger worked for the International Red Cross during the highly-publicized 1984 famine in Ethiopia. She was regularly seen on television, and helped to inspire Bob Geldof to create the Band Aid charity single. While in Ethiopia, she ran a number of children’s feeding centers, although she was never able to feed everyone.
Along with Ethiopia, she also worked in Panama, Lebanon and Papua New Guinea. Her experiences motivated her to write a book on her work, entitled Moving Mountains. Bertschinger has received the Florence Nightingale Medal, the Woman of the Year Award and the Human Rights in Nursing Award.
The content provided above is copyrighted and owned by Scrubs Magazine and is used by Allheart.com with express permission by Scrubs Magazine. For all blogs by nursinglink, go to http://scrubsmag.com/author/nursinglink/.
To test the ability of the DNase I (attached to the hydrogels) to cleave DNA
- In addition to the 5 samples of hydrogels with DNA, a sample of hydrogel without DNase and a sample of DNase I on its own were prepared as controls. The amount of DNase used for the control was equivalent to the amount of enzyme in the 500 uL hydrogel
- 1 uL of DNA from Chem 571 last year was added to each sample
- The DNA and DNase were mixed together for 3-4 hours.
- The hydrogels were then removed from the samples and the liquid was then frozen using liquid nitrogen. The samples were then lyophilized overnight.
No data was collected today.
- This reaction was carried out at room temperature, not 37°C, which may have reduced or prevented enzymatic activity
[Install the LifePage App to access full Talk]
What is Digital Forensic Investigation?
Santosh Khadsare: "Digital forensics (sometimes known as digital forensic science) is a branch of forensic science encompassing the recovery and investigation of material found in digital devices, often in relation to computer crime. Digital forensics investigations have a variety of applications. The most common is to support or refute a hypothesis before criminal or civil courts."
How I got into Digital Forensic Investigation?
Santosh Khadsare: "After completing my graduation, I did various courses in Digital Forensic Investigation and have been working since 2000 as the Investigator. Currently, I am heading the Digital Forensic Lab, Government of India, New Delhi."
[Install the LifePage App to access full Talk covering]
1) Digital Forensic Investigation
4) Tools & Frameworks
7) Human Resource Mgmt
9) Passion & Patience
13) Research & Development
15) Field open for Everyone
17) Work Life Balance
21) Ransomware & Cryptocurrency
22) Maintaining the Integrity
A Day Of:
23) Digital Forensic Investigation
Santosh Khadsare's LifePage:
LifePage Career Talk on Digital Forensic Investigation: https://www.lifepage.in/career/20180405-0004/Science/Information-Technology/Career-in-Digital-Forensic-Investigation/english
Full Talk: https://lifepage.app.link/20180405-0004
(Digital Forensic Investigation, Santosh Khadsare, Digital Forensic Lab, Head, Investigation, Cyber Crime, Cyber Laws, Digital Forensic)
[Install the LifePage App to access all Talks]
Naval operations during the Spanish-American War (1898) served to convince President Theodore Roosevelt that the United States needed to control a canal somewhere in the Western Hemisphere. This interest culminated in the Spooner Bill of June 29, 1902, providing for a canal through the isthmus of Panama, and the Hay-Herrán Treaty of January 22, 1903, under which Colombia gave consent to such a project in the form of a 100-year lease on an area 10 kilometers wide. This treaty, however, was not ratified in Bogotá, and the United States, determined to construct a canal across the isthmus, intensively encouraged the Panamanian separatist movement.
By July 1903, when the course of internal Colombian opposition to the Hay-Herrán Treaty became obvious, a revolutionary junta had been created in Panama. José Augustin Arango, an attorney for the Panama Railroad Company, headed the junta. Manuel Amador Guerrero and Carlos C. Arosemena served on the junta from the start, and five other members, all from prominent Panamanian families, were added. Arango was considered the brains of the revolution, and Amador was the junta's active leader.
With financial assistance arranged by Philippe Bunau-Varilla, a French national representing the interests of de Lesseps's company, the native Panamanian leaders conspired to take advantage of United States interest in a new regime on the isthmus. In October and November 1903, the revolutionary junta, with the protection of United States naval forces, carried out a successful uprising against the Colombian government. Acting, paradoxically, under the Bidlack-Mallarino Treaty of 1846 between the United States and Colombia--which provided that United States forces could intervene in the event of disorder on the isthmus to guarantee Colombian sovereignty and open transit across the isthmus --the United States prevented a Colombian force from moving across the isthmus to Panama City to suppress the insurrection.
President Roosevelt recognized the new Panamanian junta as the de facto government on November 6, 1903; de jure recognition came on November 13. Five days later Bunau-Varilla, as the diplomatic representative of Panama (a role he had purchased through financial assistance to the rebels) concluded the Isthmian Canal Convention with Secretary of State John Hay in Washington. Bunau-Varilla had not lived in Panama for seventeen years before the incident, and he never returned. Nevertheless, while residing in the Waldorf-Astoria Hotel in New York City, he wrote the Panamanian declaration of independence and constitution and designed the Panamanian flag. Isthmian patriots particularly resented the haste with which Bunau-Varilla concluded the treaty, an effort partially designed to preclude any objections an arriving Panamanian delegation might raise. Nonetheless, the Panamanians, having no apparent alternative, ratified the treaty on December 2, and approval by the United States Senate came on February 23, 1904.
The rights granted to the United States in the so-called Hay-Bunau-Varilla Treaty were extensive. They included a grant "in perpetuity of the use, occupation, and control" of a sixteen-kilometer-wide strip of territory and extensions of three nautical miles into the sea from each terminal "for the construction, maintenance, operation, sanitation, and protection" of an isthmian canal.
Furthermore, the United States was entitled to acquire additional areas of land or water necessary for canal operations and held the option of exercising eminent domain in Panama City. Within this territory Washington gained "all the rights, power, and authority . . . which the United States would possess and exercise if it were the sovereign . . . to the entire exclusion" of Panama.
The Republic of Panama became a de facto protectorate of the larger country through two provisions whereby the United States guaranteed the independence of Panama and received in return the right to intervene in Panama's domestic affairs. For the rights it obtained, the United States was to pay the sum of US$10 million and an annuity, beginning 9 years after ratification, of US$250,000 in gold coin. The United States also purchased the rights and properties of the French canal company for US$40 million.
Colombia was the harshest critic of United States policy at the time. A reconciliatory treaty with the United States providing an indemnity of US$25 million was finally concluded between these two countries in 1921. Ironically, however, friction resulting from the events of 1903 was greatest between the United States and Panama. Major disagreements arose concerning the rights granted to the United States by the treaty of 1903 and the Panamanian constitution of 1904. The United States government subsequently interpreted these rights to mean that the United States could exercise complete sovereignty over all matters in the Canal Zone. Panama, although admitting that the clauses were vague and obscure, later held that the original concession of authority related only to the construction, operation, and defense of the canal and that rights and privileges not necessary to these functions had never been relinquished.
The Library of Congress Archives
Presented by CZBrats
November 16, 1998
The research, part of the larger UNC-based National Study of Youth and Religion, revealed a statistical association between religion and better behavior only among teens who went to religious services at least once a week, however, or who professed deeply held spiritual views, said study director Dr. Christian Smith.
Few associations were found among adolescents who attended irregularly or who said religion was only modestly important.
"We found that kids who go to church regularly or who say that religion is important in their lives are much less likely to be involved in various forms of substance abuse, get into trouble, commit crimes, are less involved in violence, have school problems and have difficulties with their parents," said Smith, professor of sociology at UNC. "They are more likely to behave safely, try to stay healthy and be involved in volunteering, sports and other community activities.
"Our findings are not radically surprising in that they support some earlier, smaller-scale work on this issue, which is something to which not much attention has been paid by most academics," he said. "For example, in the past, people who study adolescents have often neglected or completely ignored the religious factor in teen-agers' lives."
Some social science investigators have even assumed that religion had no effect or had a pernicious influence on teens, Smith said.
Conducted with doctoral student Robert Faris, the UNC study relied on data gathered through Monitoring the Future, a nationally representative University of Michigan survey of 2,478 high school seniors, he said. The new work, released in a report today (Sept. 18), is among the most comprehensive looks yet on the link between religion and positive and negative adolescent behavior.
"One of the most interesting observations is that the religious correlation doesn't seem to kick in until it reaches the level of the most religious kids," Smith said. "That suggests a threshold below which there's little or no association."
The sociologist said he wanted to be clear that the study revealed the constructive linkages without showing yet what caused what. He and others cannot separate the effects of what morals children were taught through religious traditions, for example, from possible effects of being part of social networks that include adult role models. A paper Smith will publish next year will offer nine different hypotheses, or possibilities, about how religion may influence adolescents.
"It could also be that kids who are initially religious and start getting into trouble drop out of religion because it feels uncomfortable for them," he said. "Then when someone takes a survey, those teens show up as being not very religious, and so there is an apparent association."
Among specific findings were that especially religious youths were less likely to smoke, drink and use drugs and more likely to start later and use less if they started at all, he said. They went to bars less often, received fewer traffic tickets, wore seat belts more, took fewer risks and fought less frequently. Shoplifting, other thefts, trespassing and arson also were rarer.
"Religious 12th-graders argued with parents less, skipped school less, exercised more, participated more in student government and faced fewer detentions, suspensions and expulsions," Smith said. "These findings were statistically significant even after we controlled for race, age, sex, region, education of parents, the number of brothers and sisters and other factors."
Lilly Endowment Inc. is funding the four-year UNC project, which began in 2001. Among goals are to identify effective practices in the religious, moral and social formation in young people's lives and to foster informed national discussions about the influence of religion on adolescents.
Note: To reach Smith or for copies of the report, call Roxann Miller, director of communications for the National Study of Youth and Religion at 919-966-1559. More information is available at www.youthandreligion.org.
owls...A variety of owls may depend on a single prey species when it becomes exceptionally abundant. Prey is generally swallowed whole, and indigestible material, such as feathers, fur, and bones, are regurgitated in the form of a compact pellet.
pigeons...young, called squabs, beg for food by pushing at the parent’s breast, at the same time emitting a squeaky hunger note. They insert their bills in corners of the parent’s mouth and are then fed by regurgitation. Although a pigeon is capable of rearing an artificial brood of three young, only two squabs can be fed at a time, and natural broods of three are extremely rare. In several species so...
Professor Emerita, Biology
Life History, Ecology and Conservation Biology of Endemic Hawaiian Birds, Island Biology
Life history, ecology and conservation of Hawaiian birds have been the focus of my research for the last 45 years. Past research has included studies of altitudinal distribution of birds in relation to environmental factors, ornithological surveys of large natural areas, avian census techniques, and a variety of natural history subjects such as plant and invertebrate distributions. More recently my work has included studies of geographic variation in morphology, genetics and behavior of three species of endangered passerines in the Northwestern Hawaiian Islands and how birds were used in Hawaiian material culture. My graduate students undertake research on these topics, as well as plant-pollinator interactions, conservation biology of rare plants and invertebrates, impacts of alien species on native Hawaiian biota and seabird behavior and conservation.
Hawai'i has lost more species than any other geographic area on Earth. This extinction crisis continues today, as biologists and managers work to prevent the loss of hundreds of endangered species, including more than 30 birds, 300 plants and at least 150 species of invertebrates. My research and that of my students has sought, among other things, to document the ecologies of some of these threatened species and, through basic ecological research, to provide a sound biological basis for effecting their recovery. We are also interested in environmental conflict resolution and how interactions between science, policy and management affect natural resource conservation.
Lepczyk, C.A., N. Dauphine, D.M. Bird, S. Conant, R.J. Cooper, D.D. Duffy, P.J. Hatley, P.P. Marra, E. Stone, S.A. Temple. 2010. What conservation biologists can do to counter trap-neuter-return: Response to Longcore et al. Cons. Biol. 24:627-629.
Yeung, N. W., D.B. Carlon, and S. Conant. 2009. Testing subspecies hypothesis with molecular markers and morphometrics in the Pacific white tern complex. Biological Journal of the Linnean Society 98, 586-595.
Rauzon, M., and S. Conant. 2009. Seabirds. In: Gillespie, R. and D. Clague (eds.) Encyclopedia of Islands. Univ. of Calif. Press.
Why do airlines and training professionals insist on consistent phraseology?
Because many airline incidents occur when pilots and crewmembers misunderstand one another. Consistent phraseology, especially for particularly critical communications at critical times like takeoff and landing, or when resolving a problem, can save time by making sure crew members understand each other perfectly.
Why do airlines and training professionals insist on a “sterile cockpit?”
This has a similar reason. The fewer things a crewmember needs to pay attention to during critical stages of flight, the more likely they are to see and respond to things that are most important.
Captain David Santo and private pilot Paula Williams discuss this below.
Paula Williams: Can we do some examples of consistent phraseology? Different ways that maybe two instructors do things that have caused problems or something like that?
David Santo: Well, there is a lot. Communication, especially the English language, there’s so much ambiguity in the English language that it’s very easy to have misunderstandings or misinterpretations. If you think about all the different dialects within the United States, and then you think of all the different dialects amongst English speaking countries.
And then you think, some of our students may be foreign students where this is not their primary language. It is so easy to have ambiguity and how do we avoid that? So, in the airlines what we’ve done is during critical phases of flight, we are very scripted. And we are scripted so that if there’s any communication that’s made that’s not part of our script, it should send a red flag of warning.
Why is this communication not what I was expecting? How does it impact the proper conduct of the flight?
Paula Williams: Yeah.
David Santo: There’s some urban legends, if you will, of things like two-crew airplanes rolling down the runway. First officer is having a really tough day. Captain looks over at him and says, cheer up. And the first officer heard gear up, and reached over and pulled the gear handle up before the aircraft was fully in the air.
Paula Williams: That’s awful.
David Santo: There’s the story about airplanes being on final approach. And in the three-crew airplanes, now pretty much bygone airplanes, but in those airplanes the engineer would set the thrust levers. And the captain says take off thrust, so the engineer took the thrust to idle. What the captain wanted was full takeoff rated thrust, he wanted full power.
So he wanted just the opposite of the response that the engineer gave him. That’s ambiguity, right? We can’t have that. Ambiguity in an airplane and miscommunications in an airplane can lead to really bad outcomes. So in the airline industry we talk about a term called threat and error management.
And in the threat and error management model we look at different defenses to prevent errors, trap errors before they put us into an undesirable state. And one of those defenses that we can put in place is standardized phraseology, especially for critical phases of flight. Like, for example, at your flight school, well if you’re flying a complex single engine or twin engine airplane.
Is it gear up, is it wheels up? What is the exact terminology that we’re going to use so that they cannot be misunderstood? Is it flaps to 3, flaps to 5, flaps to 15, or is it flaps to 1 or flaps to 0? Just whatever that call-out is going to be, is it consistent, right?
And there’s 100 ways to do something, there’s 100 different ways to accomplish this. The big picture is, it doesn’t matter what you say as long as it’s consistently reinforced across everybody who’s in your airplanes that this is the meaning of when I say this.
Paula Williams: Right.
David Santo: I’ll give you another example of consistent phraseology that has always resonated to me.
And I might misquote some of the facts but the overall story will be correct. Many years ago there was a 737 operating out of what’s now Reagan National Airport. And for an airline called Air Florida, and I believe the call sign for that airplane was Palm 90. During the takeoff role, the first officer made several, my memory serves, it was like five, different non-standard call outs.
What he was trying to tell the captain, we believe, in the postmortem hindsight, is that the engines didn't look right to him. The engine thrust-setting gauges, which are what we used on that airplane to set thrust, were indicating the power setting they needed, but none of the other gauges were supporting that.
All the other gauges were showing way low on power. And in fact he was right, the engines were not making enough power; the aircraft did get into the air, but then stalled. And crashed into the Potomac River, which was iced over. And there are still pictures of the helicopters pulling the passengers and the flight attendants out of the submerged wreckage of that airplane.
Had that captain fully embraced and understood the philosophy behind a sterile cockpit, things might have turned out differently. What is a sterile cockpit? Any non-scripted phraseology during a critical phase of flight should raise a red flag of warning. Now, it's easy to Monday-morning quarterback, okay, so I'm not in any way trying to pick on these guys. But, as a learning lesson, what they can teach us is this: had the captain said, "I don't understand why you're making these non-standard call-outs, so I'm going to abort the takeoff."
What they would've discovered, had they gone back to the gate, is that their engine probes were iced up. And because they were iced up, they weren't reading accurately. Now, that's what we call an error chain. But consistently, if you look at the cause of most aircraft accidents and incidents, at some point they're related back to human factors.
David Santo: And that's a very nice way, in most cases, to say human interaction. If there's two of us in the airplane, a flight instructor and a student, you've got two people that have different pictures of the situation, different levels of awareness. And both of them should be working together as a team to make sure that we keep the aircraft and its occupants safe.
Great. So consistent phraseology is really, really important to that.
Paula Williams: So it sounds like the whole purpose of having two pilots assumes that you're communicating well. When you're not communicating well, if you're not using consistent phraseology, then having a crew there almost impedes the safety of the flight rather than adding to it.
Because of that communication factor, there's the possibility of things getting in the way when somebody already knows a situation is not good, or already knows things aren't the way they should be, and it impacts their decision making in a bad way rather than in a good way.
David Santo: Well and that’s absolutely right.
So you look at, years ago, Northwest Airlines, also a bygone airline that's now part of Delta. They lost an aircraft in Detroit, and during the taxi out they were really distracted by an aircraft that had landed with the call sign of Water Ski. And they were talking about water skiing, the commute, things that had nothing to do with their flight, right?
So, what is sterile cockpit? No extraneous conversation during critical stages of flight except those duties required for the safe operation of the flight. So the focus has to be on the proper conduct of the flight. Now, in a flight instruction situation, we can't say, hey, the flight instructor can't be talking to the student during the taxi out and during critical phases of the flight, but the conversation needs to be pertinent.
Paula Williams: Mm-hm.
David Santo: And clear, unambiguous, using advocacy statements and inquiry questions to make sure that there’s a good feedback loop, that we’re hearing what was intended to be transmitted, and we’re repeating that back to make sure that we got it right.
Paula Williams: Mm-hm.
David Santo: So that we have a very precise, even though it’s flight instruction and we’re talking during critical phases of flight, it’s still very precise. We’re not talking about cars and girls and airplanes and boats and any of the other many things that can distract us. We have to stay focused on what’s at hand.
Ready to learn more?
This is from a water resources paper that I'm reading that uses Bayesian inference. It is a simple problem, but I don't know if I am missing something.
Assume that you have a vector of imperfect observations Xo of a true unknown variable Xt and that the errors e are additive:
Xo = Xt + e (EQ 1)
I would like to compute the conditional probability p(Xo|Xt) for the case that the variable is exact.
The solution is given in the paper as:
p(Xo|Xt) = p_e(Xo - Xt) (EQ 2)
where p_e denotes the probability density function of the error e.
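Reading EQ 2 as a statement about densities, i.e. the likelihood of an observation Xo given a fixed true value Xt is the error pdf evaluated at Xo - Xt, a quick simulation makes the claim concrete. Everything numeric here is my own illustrative assumption (a zero-mean Gaussian error, its standard deviation, the fixed true value); the paper only commits to such an error model later:

```python
import math
import random

random.seed(0)
SIGMA = 0.5        # assumed error standard deviation (illustrative)
XT = 2.0           # one fixed true value Xt (illustrative)
N = 200_000

# EQ 1: each observation is the true value plus an additive error
xo = [XT + random.gauss(0.0, SIGMA) for _ in range(N)]

def error_pdf(x):
    """Density of e ~ N(0, SIGMA^2) evaluated at x."""
    return math.exp(-0.5 * (x / SIGMA) ** 2) / (SIGMA * math.sqrt(2.0 * math.pi))

# Empirical density of Xo on 60 bins over [XT - 3*SIGMA, XT + 3*SIGMA],
# compared with p_e(Xo - XT), the error pdf re-centered at XT.
lo, hi, nbins = XT - 3 * SIGMA, XT + 3 * SIGMA, 60
width = (hi - lo) / nbins
counts = [0] * nbins
for x in xo:
    i = int((x - lo) / width)
    if 0 <= i < nbins:
        counts[i] += 1

max_gap = max(
    abs(counts[i] / (N * width) - error_pdf(lo + (i + 0.5) * width - XT))
    for i in range(nbins)
)
print(max_gap)  # small: the histogram tracks the shifted error pdf
```

For a fixed Xt, the simulated observations distribute like the error density re-centered at Xt, which is one way to see why the paper's "solution" is just the error pdf evaluated at Xo - Xt.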
So according to Bayes Theorem:
p(Xo|Xt) = p(Xt|Xo) * p(Xo) / p(Xt) (EQ 3)
If the variable is exact I assume that it means that the following conditional probability is equal to one: given an observation Xo, the probability that it is equal to its true value Xt. That is:
p(Xt|Xo) = 1 (EQ 4)
That leaves me with the ratio of the priors:
p(Xo|Xt) = 1 * p(Xo) / p(Xt) (EQ 5)
I can substitute EQ 1 into EQ 5:
p(Xo|Xt) = 1 * p(Xt + e) / p(Xt) (EQ 6)
This is where I have problems. First of all, I know that there is a possibility of observing an Xo that gives as a result the corresponding true value Xt with perfect accuracy, since the variable is assumed to be exact. But according to Bayes Theorem there also exists the possibility that measuring not Xo (~Xo) would also result in the true value Xt, that is: a false positive? In other examples it is easier for me to understand the concept of ~Xo, but what does it really mean that we are observing ~Xo rather than Xo? Secondly, I know that the distribution of the sum of two random variables (such as in the numerator of EQ 6) is a convolution of the individual pdfs and therefore depends on the statistical distributions of the original variables. Is this how this should be solved, or is the solution simpler?
I appreciate any help you can provide me.
At first, that's what I thought the term "exact" meant, but note that the solution in the paper is not zero; e is still different from zero. The way I interpret "the variable is exact" is that there is a one-to-one correspondence between certain observed values Xo and their true value Xt, but this doesn't invalidate the fact that there are also certain cases (the probability is not zero) when we observe something different than Xo and we still get the correct answer Xt. I realize that this second part may sound confusing and perhaps is wrong, but that's how I'm trying to interpret the problem.
Yes, the way I understand this is that there is a probability that we observe the value Xo that results in Xo=Xt. Since the variable is exact I would assume that the probability of observing Xo is then equal to the probability that e=0 (which in this case is not 1).
But also there is a possibility that we don't observe Xo, but some other value ~Xo. A fraction of the other values not equal to Xo (e.g. ~Xo) result in Xt.
It is easier for me to understand or explain this idea using a conceptual pie chart:
In the chart shown the yellow exploded slice represents the probability that we observe the value Xo that is equal to Xt and therefore for this case e=0. The rest of the pie (red+magenta) is the probability that we observe something different than Xo (~Xo). Within this probability the magenta region corresponds to the probability that we observe ~Xo, but still get the correct answer Xt. The red region therefore represents the probability that we observe ~Xo but we get ~Xt.
As you can see from this graph we can derive another form of the Bayes Theorem:
p(Xo|Xt) = p(Xt|Xo)*p(Xo) / (p(Xt|Xo)*p(Xo)+p(~Xo)*p(Xt|~Xo))
in other words we get the expression that we have before:
p(Xo|Xt) = p(Xt|Xo)*p(Xo) /p(Xt)
The authors mention that they selected the additive model with the idea of using normally distributed errors (probably with zero mean) later on. However, nothing in their analysis up to this point has mentioned this assumption.
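On the convolution question raised above: if one assumes, purely for illustration, that Xt and e are independent Gaussians (the zero-mean normal error the authors apparently had in mind), then the marginal p(Xo) appearing in EQ 5 and EQ 6 really is the convolution of the two pdfs, which for Gaussians is again Gaussian with the variances added. All parameter values below are invented for the sketch:

```python
import math
import random

random.seed(1)
MU_T, S_T = 1.0, 1.0   # illustrative prior: Xt ~ N(MU_T, S_T^2)
S_E = 0.5              # illustrative error: e ~ N(0, S_E^2)
N = 200_000

# EQ 1 again, now with Xt random: Xo = Xt + e
xo = [random.gauss(MU_T, S_T) + random.gauss(0.0, S_E) for _ in range(N)]

# Convolving the two Gaussian pdfs gives N(MU_T, S_T^2 + S_E^2)
s_o = math.sqrt(S_T**2 + S_E**2)
mean = sum(xo) / N
std = math.sqrt(sum((x - mean) ** 2 for x in xo) / N)
print(mean, std)  # close to MU_T and s_o (about 1.118)
```

So there is no shortcut in general: the numerator of EQ 6 depends on the distributions of both Xt and e through their convolution; the Gaussian case is simply the one where the convolution has a closed form.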
The collections of the Mawangdui Han Tombs are gems. The tombs were unearthed at Mawangdui, in the eastern suburbs of Changsha city near the Liuyang River, between 1972 and 1974. The female corpse excavated from the No 1 Han Tomb has a history of over 2,100 years. It is remarkably well preserved, similar in condition to a fresh corpse: the entire body is supple, some of the joints can be moved, and the soft tissue remains flexible. It is different from a mummy or a cadaver tanned in a peat bog, and represents a miracle of preservation. When the tomb was first discovered, it shocked the world, and it has attracted many scholars and tourists. After autopsy, the body and entrails of the corpse were put on display in a specially designed basement.
More than 3,000 relics have been unearthed from the three tombs, most of which are well preserved. There are 500 pieces of lacquerware, which are delicate, luxurious, and as shiny as new. The abundant silk products from the No 1 tomb are quite precious, and include well-preserved thin silks, gauzes, yarns, brocades, and so on. One of the most outstanding specimens is a silk coat as light as mist and as fine as gossamer. It is 1.28 meters (about 1.40 yards) in length and has a pair of long sleeves, yet weighs only 49 grams. The silk paintings are the earliest works yet discovered that depict daily life at that time. There are also colored potteries, silk books, weapons, musical instruments, seals, and so on.
The lacquered surface of the coffin excavated from the No 1 tomb is decorated with unique images of animals and gods, and has relatively high artistic value. The numerous silk books unearthed from the No 3 tomb represent precious historical literary material. The books cover ancient philosophy, history, technology, and so on. There are 28 different types of books totaling 120,000 words. There are also several books with illustrations, most of them lost ancient books.
A map excavated from the No 3 tomb provides another surprise. Its drawing technique is very advanced, and the place markings are similar to those on a modern map. Foreigners praised it as 'a striking discovery' when it was exhibited in America, Japan, Poland, and many other countries.
The Mawangdui Han Tomb site is located in the eastern suburbs of Changsha City, five kilometers from the center of the city. According to legend, it was the tomb of Ma Yin, the King of the Kingdom of Chu, and was thus named Mawangdui. Among the three Han tombs, the No 1 tomb belonged to the wife of Li Cang; the No 2 tomb was that of Li Cang himself, Marquis of Dai and chancellor of Changsha at the beginning of the Han Dynasty; and the No 3 tomb held their son.
Of the three tombs, the No 1 tomb is the largest: 19.5 meters long from north to south, 17.8 meters from east to west, and 16 meters deep. The No 1 and No 2 tombs have now been sealed, while the No 3 tomb has been preserved after being reinforced. A new cover was built over it to make it convenient for people to visit the tomb.
The Mawangdui Han Tombs have become a treasure of China, and tourists from abroad often mention them in the same breath as the Qin Terracotta Warriors and Horses.
The museum is undergoing reconstruction from 2012 to 2015; during these three years it is not open to visitors.
Leo Baeck was born in Lissa (now Leszno, Poland), in the then German province of Posen on May 23, 1873, the son of a Rabbi. After attending the conservative Jewish Theological Seminary in Breslau (now Wroclaw, Poland), he moved to Berlin to study at the more liberal Lehranstalt für die Wissenschaft des Judentums in Berlin. By 1897 he had secured his first post as rabbi in Oppeln (now Opole, Poland).
In Oppeln, Baeck made his mark as an intellectual and a modern theologian with the publication of Das Wesen des Judentums (“The Essence of Judaism”) in 1905. Written in response to Adolf von Harnack’s Das Wesen des Christentums (“The Essence of Christianity”), the book is a passionate argument for the enduring relevance of Judaism. Rather than the cult based on outmoded rituals and laws that Harnack saw in Judaism, Baeck located the essence of Judaism in the intersection between rational ethics and a personal experience of the divine. The commandment to search the scriptures for ethical principles, he argued, made Judaism an evolving, perpetually modern tradition of critical thought.
A humanist, a scholar, and a modern theologian, a man deeply versed in both rabbinical study and Western culture, a Feldrabbiner during World War One, Baeck was irreversibly committed to the cause of Jewish life in Germany. In many ways he symbolized the delicate, fertile symbiosis of Jewish and German thought that characterized the years before Hitler’s Reich. A stoic, Baeck remained at his post as the civilization he loved was shredded. He was a reluctant interlocutor with the Nazis from their rise through the Final Solution, a stance for which he was rewarded with dispatch to the Theresienstadt concentration camp.
–Roger Cohen, from Leo Baeck Institute at 50, 2005.
In 1912 Leo Baeck was called to Berlin, where he worked both as a rabbi at the large synagogue on Fasanenstraße as well as a lecturer at the Hochschule für die Wissenschaft des Judentums.
A patriot who was committed to the cause of Jewish life in Germany, Baeck emerged as an important symbolic and political leader of German Jewry. During the First World War, Baeck served as a chaplain (Feldrabbiner) in the German Army. In 1918 he returned to Berlin and worked at the Prussian Culture Ministry as an expert in Hebrew. In addition to his position as a rabbi and his lecturing at the Hochschule, Leo Baeck also became President of the Union of German Rabbis (Allgemeiner Deutscher Rabbinerverband) in 1922. He was elected President of the German B’nai B’rith Order in 1924. At this time Baeck also joined the Central-Verein deutscher Staatsbürger jüdischen Glaubens, and the Jewish Agency for Palestine.
When the Nazis rose to power in 1933 Leo Baeck was elected president of the Reichsvertretung der deutschen Juden, an umbrella organization of German-Jewish groups founded to advance the interests of German Jewry in the face of Nazi persecution. The organization was forced to change its name to the Reichsverband der Juden in Deutschland in 1935 to reflect the Nazi view that there were no “German Jews” but only “Jews in Germany.” As the head of this organization, Baeck worked to maintain the morale of German Jews and alleviate the discrimination and persecution of the Jews by the National Socialists. Under Baeck, the organization also helped Jews emigrate from Germany.
In spite of several offers of emigration, Leo Baeck refused to leave Germany or his community, even after Jewish businesses and synagogues (including his home congregation at Fasanenstrasse) were burned and looted in November 1938. He is reported to have said that he would only leave Germany when he was the last Jew remaining there. He remained the nominal president of the Reichsverband when it was placed under Nazi control and renamed the Reichsvereinigung der Juden in Deutschland. When this organization was finally disbanded in 1943, Leo Baeck, along with his family members, was sent to the concentration camp at Theresienstadt (Terezin) at the age of seventy.
During his time in Theresienstadt, Leo Baeck continued to teach, holding secret lectures on philosophy and religion in the barracks of the camp. In spite of being forced to perform hard labor, he also managed to begin a manuscript that would later become Dieses Volk – Jüdische Existenz, (“This People Israel: The Meaning of Jewish Existence”) an interpretation of Jewish history. The camp was liberated in May 1945 by the Red Army. None of Baeck’s four sisters survived Theresienstadt.
After the liberation of the camp, Leo Baeck eventually made his way to England where his daughter Ruth resided. He received many citations and honors as a result of his efforts under the Nazis, and spent much of his next years travelling and lecturing, as well as writing and helping to found several organizations with the goals of assisting the remnants of European Jewry. He also reached out to the new Federal Republic of Germany and to Israel.
In 1955, a group of émigré German-Jewish intellectuals including Hannah Arendt, Martin Buber, Robert Weltsch, and Gershom Scholem met in Jerusalem to found an institute that would preserve the history of the German-Jewish culture. They named the Institute in Baeck’s honor and appointed him its first President. Although Leo Baeck died just over a year later, on November 2, 1956, he left an indelible imprint on the mission and work of the Leo Baeck Institute.
A team of Australian researchers has produced a 400-year record of El Niño that shows dramatic changes in the phenomenon, a task previously deemed ‘impossible’ by coral experts.
A team of Australian scientists have produced a 400-year record of El Niño weather events with a specialised method of analysis that uses cores drilled from coral reefs.
The paper, published in Nature Geoscience, shows that the nature of these phenomena, which are a driver of extreme weather events across the globe, has changed over time, and suggests that the strength of Eastern Pacific El Niños is likely to increase in future.
“We are seeing more El Niños forming in the central Pacific Ocean in recent decades, which is unusual across the past 400 years,” said lead author, Dr. Mandy Freund.
“There are even some early hints that the much stronger Eastern Pacific El Niños, like those that occurred in 1997/98 and 2015/16 may be growing in intensity,” she said.
“By understanding the past, we are better equipped to understand the future, especially in the context of climate change.”
At the heart of the novel method to identify seasonal patterns in El Niños was the understanding that coral records could contain sufficiently granular data to document such changes. This had never been done and had previously been labelled ‘impossible’ by experts in the field.
Dr. Freund took her proposed technique to a group of climate scientists and coral experts: Dr Ben Henley, Prof David Karoly, Assoc Prof Helen Mcgregor, Assoc Prof Nerilie Abram, and Dr Dietmar Dommenget.
The team worked to refine the process, which leverages machine learning to reconstruct El Niño events in time and space, and Dr. Freund found agreement in the results of this technique and existing instrumental records.
After three years, the team found an unprecedented increase in the number of El Niños forming in the Central Pacific over the past 30 years, compared to all 30-year periods in the past 400 years, and that the stronger Eastern Pacific El Niños were the most intense El Niño events ever recorded.
The results also represent a world-first 400-year El Niño record, and a novel new methodology that may well form the basis for future climate research.
Stay up to date by getting stories like this delivered to your mailbox.
Sign up to receive our free weekly Spatial Source newsletter. | <urn:uuid:bdc6f52d-0408-4f92-9cf8-5c20e77fdc65> | {
"date": "2019-09-15T20:52:01",
"dump": "CC-MAIN-2019-39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572289.5/warc/CC-MAIN-20190915195146-20190915221146-00136.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9525994062423706,
"score": 3.390625,
"token_count": 513,
"url": "https://www.spatialsource.com.au/gis-data/aussie-researchers-crack-el-ninos-code"
} |
Welding can be a scary thing for the novice Chevy builder. While not difficult per se, if you've never used a welder before and your Bow Tie needs some new metal installed, it can be a daunting task.
The nice thing about our modern age (or at least compared to when cars were being restored in the '80s and early '90s) is that welding equipment is cheaper than ever to purchase, and the newer DIY-oriented welders have many automatic features that make the process even easier for a first-timer. Miller Electric's Millermatic series and Lincoln Electric's Power MIG series welders are great examples of user-friendly equipment featuring many automatic functions that make welding easy. Plus, these units come in at under $1,000-a real bargain.
On the electric side of welding, besides the commonly known arc welding, you have MIG (metal inert gas) and TIG (tungsten inert gas) welders. TIG welding is usually reserved for more experienced welders, so we're just going to focus on the MIG side.
MIG welding (also known as gas metal arc welding, or GMAW) is a semi-automatic or automatic arc welding process in which a wire electrode from a spool and a shielding gas are fed through a welding gun. A constant-voltage, direct-current power source is most commonly used with MIG, but constant-current systems, as well as alternating current, can be used. There are four primary methods of metal transfer in MIG, called globular, short-circuiting, spray, and pulsed-spray, each of which has distinct properties and corresponding advantages and limitations.
Developed in the '40s for welding aluminum and other non-ferrous materials, MIG was soon applied to steels because it allowed for lower welding time compared to other processes. The cost of inert gas limited its use with steels until several years later, when the use of semi-inert gases such as carbon dioxide became common. Further developments during the '50s and '60s gave the process more versatility, and as a result it became a highly used industrial process.
Today, MIG is the most common industrial welding process because of its versatility, speed, and the relative ease of adapting the process to automated robotic welding. Car companies use MIG welding almost exclusively. Unlike welding processes that do not employ a shielding gas, it is rarely used outdoors or in other areas where moving air can disperse the shielding gas. A related process, flux-cored arc welding, often does not utilize a shielding gas, instead employing a hollow electrode wire that is filled with flux on the inside.
While working on our Project XS Chevelle and '55 Bel Air at Classic Automotive Restoration Specialists, we took a few minutes to make notes about some stuff that the first-time welder would find helpful before he starts performing surgery on his classic Chevy.
Before getting started on your car, the best thing you can do is get some scrap pieces of metal to play around with and use for getting familiar with the particular welder you've got. Scrap metal can help you hone your skills, experiment with different beading techniques, and allow you to play a bit and get a feel for the whole process of MIG welding.
It's really not that hard, and with some practice and patience, you can learn how to fix your car's metal and feel the satisfaction of doing the job yourself. You may even save a few bucks in the process. | <urn:uuid:c3a92a09-91aa-4f51-aaca-e6675a01661d> | {
Changes in the Strategies that Advertisers Adopted at Various Times
History TOPIC ID 22
1890 - 1920
Advertisements encouraged consumers to buy brand-name products. An ad for Kellogg’s, the cereal maker, portrays an assertive woman telling her grocer: "Excuse me. I know what I want, and I want what I asked for, Toasted Corn Flakes. Good day." The product itself remained at the center of advertisements.
1920 - 1929
The 1920s was the decade during which the phrase “Madison Avenue” was first used to describe the advertising industry and in which many products were sold because they held out the promise of a more modern and freer life, filled with exciting opportunities to consume new products.
Some ads stressed that ordinary Americans could have the same products as the rich and the socially prominent. Others described natural products as superior to artificial products. Many ads for cars and refrigerators treated these products as objects worthy of worship by surrounding them with halos. Invented characters like General Mills' Betty Crocker and Philip Morris's little bellhop, Johnny, helped consumers establish a personal connection with a particular product.
1930 – 1941
The Great Depression ushered in a heightened concern with thrift; but many ads also featured celebrities who promised that ordinary Americans could also be glamorous.
Scare campaigns were popular during the Great Depression. Thrift was increasingly treated as a virtue. Prices were increasingly mentioned in ads. Men in overalls began to appear in advertisements. Some products (such as soap) were sold as ways of ensuring employment.
1941 – 1945
During the war years, private businesses produced posters—like the famous portrait of Rosie the Riveter—as a way to remind consumers about their companies while demonstrating their support for the war effort. Image campaigns sought to associate companies with wartime concerns. Stetson hats promoted its product with the slogan “Keep it under your Stetson,” reminding the public about the importance of maintaining wartime secrecy. It was in 1942 that the Advertising Council, the industry's chief trade association, was founded; it was originally called the War Advertising Council. It was established as an adjunct of the Federal Office of War Information and sponsored public service ads, such as Smokey the Bear campaign that reminded the public that "Only You Can Prevent Forest Fires."
Much advertising emphasized the family and the theme of family togetherness, including such products as the family car and the suburban home. The art in many ads became much more self-consciously artistic.
18-year-old Helena Muffly wrote exactly 100 years ago today:
Wednesday, April 2, 1913: About the same as the other days.
Her middle-aged granddaughter’s comments 100 years later:
I’d like to thank Kristin at Finding Eliza for sharing a link with me that I found fascinating and provided the inspiration for this post.
Since Grandma didn’t write much a hundred years ago today, I’m going to write about an issue that was important both a hundred years ago and today: poor working conditions for garment workers.
On March 25, 2011 I wrote a post about the hundredth anniversary of the Triangle Shirtwaist Factory Fire in New York City that killed many workers. The public outrage over that fire led to many safety and labor improvements in the garment industry (and other industries).
To commemorate the 102nd anniversary of the Triangle Shirtwaist Factory fire on March 25, The Sewing Rebellion website included a downloadable pattern for the shirtwaist that was made by the Triangle Factory.
The Sewing Rebellion points out that many garment workers in other countries still work under very poor conditions, and encourages people to emancipate themselves from the global garment industry by learning how to alter, mend and make their own garments and accessories.
What goes around, comes around. It’s intriguing to think that instead of buying new clothes each season, maybe we could again learn how to make and alter our clothes.
MacNeill, Janet Smith (Jennie Bahn)
By Nancy V. Smith, 1991
Janet Smith MacNeill (Jennie Bahn), subject of North Carolina legend, was born in Scotland, the daughter of John, a lowland Scot, and Margaret Gilchrist Smith. The Smiths migrated to the colonies about 1739 and settled in the region that became Harnett County, N.C. Margaret Gilchrist died on the voyage to America and John Smith died sometime before 1754.
A contemporary of Flora MacDonald, Janet Smith was well known to her Scottish neighbors as a spirited, attractive young woman. Traditionally, she is said to have been small, redheaded, and fair-complexioned. Her neighbors nicknamed her "Jennie Bahn," meaning Jennie the Fair.
Jennie Bahn and her husband, Archibald MacNeill, were said to be the largest cattle raisers in America before the Revolution. One of the earliest and most famous legends surrounding Jennie Bahn has her regularly driving 3,000 head of cattle from Cross Creek to Philadelphia. Because it was impossible to take enough feed for a herd this size, much less buy it during the long journey to Philadelphia, this legend has been refuted. It is known, however, that she would occasionally help drive a herd of around 1,500 to Petersburg, Va. According to one story, on one trip she tried to buy feed from a Virginia farmer but he refused to sell it to her. Not to be outdone, she let her cattle inside his fences to graze. It is also known that Jennie Bahn did visit Philadelphia, where she met Benjamin Franklin. She was so impressed by Franklin that there has been a Benjamin Franklin in the MacNeill family and collateral families since that trip.
Another legend concerns her original, though inaccurate, surveying techniques. She would take a slave to a tract of land and send him walking until he heard her bell. At the clang, he would change direction. Her neighbors did not like her methods of surveying and accused her of infringing on their land. She was never taken to court for these infringements, however, because she wisely patented the tracts under the names of her husband and children. Her name never appears on the records at the land grant office in Raleigh or on the records of the Fayetteville courts.
As the driving force in her family, Jennie Bahn is said, at the start of the Revolution, to have divided her six sons so half would serve the king and the other half would serve the cause for independence. She remained neutral in order to sell cattle to both sides. This way the MacNeill family could brag about its sons no matter which way the war was going and make money at the same time. Actually, five of her six sons served with Loyalist forces. Of these five, "Nova Scotia" Daniel and "Leather Eye" Hector were known as outstanding Tory leaders, and "Cunning" John led his troops in the onslaught at the Massacre of Piney Bottom. As for Jennie Bahn, it is said that she regarded the British troops stopping by her home with the utmost distaste.
Jennie Bahn married "Scorblin'" (scrubbling) Archibald MacNeill sometime before 1748. They had seven sons and two daughters. After the war Jennie Bahn and Archie MacNeill moved to their home in Cumberland County on the lower Little River in the Sandhills. They were buried together in the nearby MacNeill cemetery. The final legend surrounding Jennie Bahn comes after her death. Her tombstone is said to have been so heavy that it was 125 years before it was taken from Fayetteville and placed on her grave.
1 January 1991 | Smith, Nancy V.
The oppressive heat that rolls over the city each summer is as tangible as the heat waves rising up from the concrete.
"The big influence during the summer months is the moisture that comes in off the Gulf of Mexico," explained WAFB Chief Meteorologist Jay Grymes.
While higher temperatures are likely to come as the summer progresses, medical personnel will see an increase in heat-related emergencies at the start of summer when residents are not as acclimated. Heat hits the old, the young and the furry especially hard.
"It can be as simple as some muscle cramps, usually in the legs or the abdomen area. That can progress to having paler, cool skin and eventually lead to where you eventually stop sweating and that would be most serious case which would be a heat stroke," explained Nick McDonner with EMS.
"Dogs actually can't sweat. The only way for them to dissipate heat or cool off is through panting. So, putting them in the sun or having them outside, if they get overheated you need to make sure there is adequate water source and shade," advised veterinarian Dr. Andrea Anderson.
The body's first defense against heat is sweat. As sweat evaporates, it takes the heat in our skin with it.
"Unfortunately down here, the air is so humid that the evaporation doesn't occur very effectively," said Grymes. "What happens is you build up that sweat on your skin and that actually becomes an insulating layer. So instead of cooling you off, perspiring down here tends to heat you up even more."
That's why staying cool in the shade and hydrated with water are especially important in our area.
McDonner also warned about the dangers that lie inside hot cars. According to EMS, the temperature in a car can rise 20 degrees in less than 10 minutes.
It can only take a few minutes for your body to overheat, especially if you are not accustomed to the high temps. If you experience muscle cramps, dizziness, or you stop sweating, get medical help immediately.
Also, avoid drinks with high sugar, caffeine or alcohol, which can actually lead to dehydration.
Copyright 2013 WAFB. All rights reserved.
Emergency contraception can take two forms: either a high dose of oral contraceptive (such as the “morning-after” pill or the newly approved ella) or an intrauterine device (IUD). Oral contraceptives are used most often.
How it works
Emergency contraception (EC) is offered to women within 72 hours after unprotected sex for the “morning-after” pill or 5 days after for ella. If taken within the first 72 hours after intercourse, the “morning-after” pill will be only about 75% effective. Studies have shown that ella can decrease the chance of pregnancy by about two-thirds for at least 120 hours after unprotected sex. According to Planned Parenthood's explanation of emergency contraception, “It prevents pregnancy by stopping ovulation, fertilization, or implantation. It will not affect an existing pregnancy. And it will not cause an abortion.” However, because life begins at fertilization–not implantation—this description is deceptive. EC can prevent the already existing human embryo from implanting into the uterine wall, thereby forcing the woman to expel her embryonic baby via chemical abortion.
Determining whether fertilization has occurred this early is difficult, so women take EC without certainty of a pregnancy. Aside from the medical risks and social implications, this uncertainty dictates that the Right to Life movement cannot support the use of EC.
Dangers of EC
In addition to EC causing a chemical abortion, this powerful drug endangers women and girls. Recent medical studies have proven that intensified doses of hormones (such as those used in hormone replacement therapy) are harmful to women. Scientists found that these high doses of artificial hormones, mainly progestin (the hormone used in emergency contraception), lead to greater risk for breast cancer, stroke, and heart attack. Planned Parenthood acknowledges only a few of the negative effects on its website: most women will experience nausea and sometimes vomiting, which should pass after about 24 hours. Planned Parenthood masks the dangers of taking drugs without the counsel of a doctor.
On August 13, 2010, the FDA approved the use of ella as a controversial form of emergency contraception. Currently, women and girls still need a prescription to obtain ella.
The “morning-after” pill is available at most pharmacies across the country. Women (and men) age 17 and older can obtain this pill without a prescription (under 16 will need a doctor’s prescription). Minor girls are still at risk, however, since individuals who are 17 or older can purchase the pills without question. Conceivably, a sexual predator could purchase the pills and pass them along to a minor girl. Parental involvement should be necessary in regard to the health decisions of minor daughters; this safeguard would protect these young girls from sexual predators. (The teenage girl’s sexual partner is an average of 6 years older than she is.)
Removing the doctor’s expertise, his familiarity with his patient, and his guidance on medication by allowing women to self-administer this powerful drug puts women at risk. In no other medical situation could a patient autonomously determine that a drug as powerful as the “morning-after” pill is an appropriate course of action and then procure the drug for herself without a prescription. Because the “morning-after” pill can cause a chemical abortion, the basic health standards and requirements for its usage (i.e., doctor supervision, warning labels, patient information, contraindications, etc.) should not be ignored or lowered to a level that further endangers women. | <urn:uuid:de019a43-2e18-476a-85eb-93d4ebd470ef> | {
"date": "2019-03-23T00:32:41",
"dump": "CC-MAIN-2019-13",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202704.58/warc/CC-MAIN-20190323000443-20190323022443-00496.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9209313988685608,
"score": 2.765625,
"token_count": 734,
"url": "https://www.texasrighttolife.com/emergency-contraception-2/"
} |
GLIMPSE is a 5-year project supported by the Leverhulme Trust based in the Glaciology Group at the School of the Environment and Society, Swansea University.
To improve predictions of the future impact of the Greenland ice sheet on the Earth system through high quality, collaborative research using fieldwork, remote sensing and modelling studies. This research will lead to a data legacy for future modelling and a knowledge base that will be disseminated to the academic community, policy makers and the general public.
Swansea Glaciology and the GLIMPSE Project appear on the big screen! Watch a 9 minute clip of A Glimpse of Greenland: The Disappearing Ice.
The federally listed California red-legged frog received some good news recently: A portion of its riparian habitat—shaded by many old sycamore, oak, and bay trees—won’t be developed, thanks to Save Mount Diablo (SMD). Having disappeared from more than three-quarters of its historic range, today this native frog—once the most populous in the state—is found in fewer than 250 streams across California.

Recently, SMD optioned a 20-acre parcel along Morgan Territory Road, adjacent to Mount Diablo State Park and just across from Morgan Ranch, previously purchased by SMD for transfer to the state park. The new parcel includes a section of Marsh Creek, which is important red-legged frog habitat. Owned by a descendant of Jeremiah Morgan, for whom the Morgan Territory Regional Preserve east of Mount Diablo is named, the land is known as the “red corral” for its red cattle chute and fences. To purchase this property for the park, SMD will need to raise $240,000 by the end of 2002. You can contact Save Mount Diablo at (925) 947-3535 or visit www.savemountdiablo.org.

Fall is a spectacular time to visit this remote eastern region of the East Bay. The Mount Diablo Interpretive Association sponsors informative walks and hikes throughout the fall as part of its Autumn-on-the-Mountain events calendar. Call (925) 837-6119 or visit www.mdia.org/events.htm.
The article that was printed in The Gazette recently titled, “Taking vitamins to prevent cancer or heart disease may backfire” is not entirely true.
The vitamins that are tested, like the most popular brands Centrum and One-A-Day, are 1) in such minuscule doses that they would not support disease prevention of any kind; and 2) synthetic -- that’s the key. The body cannot identify how to use these supplements, so they are passed through the body without benefit.
For example, the Centrum Silver Adult 50+ label states as follows: Vitamin A-2500 IU; B1-none; B2-none; B6-3 mg; B12-25 mcg; and magnesium-50 mg, just to name a few.
And the filler ingredients are too many to name. For example, Red dye #40; Yellow dye #6; corn starch; aluminum, which has been linked to Alzheimer’s. Also, maltodextrin, which is another name for MSG, in fact interrupts brain function and neurotransmitters.
A good quality vitamin that is not synthetic, the body absorbs and uses states as follows: Vitamin A-5000 IU; B1-125 mg; B2-50 mg; B6-105 mg; B12-600 mcg, and magnesium-400mg. There are some differences in the amount of each nutrient, but the key is it’s not synthetic. These vitamins have seen the light of day and were not made in a lab.
I also thought it interesting that the last paragraph of the article states, “The advice did not apply to children, women who are pregnant, or people with chronic illness, or people that need to take supplements because they can’t get all their essential nutrients from their diet.” Which is it? People are people. In my opinion, regardless of age, everyone needs a good-quality supplement to maintain health. If one tried to get all of these nutrients from food alone, one would be extremely overweight.
So after all is said and done, I would agree with the article, but change the wording from “vitamins are not effective” to “synthetic vitamins are not effective.”
Trends in the 19th and 20th Centuries
Near the close of the 19th century came a new revival in the art of typography. It was led and stimulated by William Morris, an English artist, writer, and craftsman. He was unable to find types, paper, or printing that satisfied his standards. He decided to learn the art of printing.
In 1891 he founded the Kelmscott Press. He designed special types with the aid of his friend Emery Walker. Morris' books, printed by hand on handmade paper, were in the style of the finest of the early books. They were soon being imitated by a host of other private presses. Among the best of these were the Doves Press of Thomas J. Cobden-Sanderson and the Ashendene Press of C.H. St. John Hornby. Morris had an enormous influence. Among his American followers were Bruce Rogers, Daniel B. Updike, and Frederic W. Goudy.
Daniel Berkeley Updike opened the Merrymount Press in Boston in 1893. He stocked only types that met the twin criteria of economy in use and beauty of design. His books are both functional and pleasing to the eye.
Of great importance to printing in the 20th century was the designing of good typefaces for composing machines. Frederic William Goudy, the American type designer, created more than 100 faces during a long career as a printer, editor, and typographer. In 1908 he began a long association with the Lanston Monotype Corporation, for which he did much of his best work. Among his types were Forum and Trajan, which were based on the roman capital letters inscribed on Trajan's Column; Goudy Modern, his most successful text face; and a number of black-letter and display faces. Many of these were intended for the Monotype.
William Addison Dwiggins, a student of Goudy, was long associated with the publishing firm of Alfred A. Knopf, whose house style he helped to establish. Dwiggins designed a number of typefaces for the Linotype, two of which--Electra and Caledonia--have had wide use in American bookmaking. The fine types of Aldus, Garamond, and Baskerville were recut for machine use. In adapting these styles an outstanding figure was Stanley Morison of the English Monotype Company. He revived many fine old typefaces and commissioned some of the best modern ones.
During the 20th century styles in book design, as in all the arts, fine or applied, have become increasingly international. Styles born in one country spread throughout the world and die through overuse at a dizzying rate. As a consequence, it has become increasingly difficult to distinguish truly individual or national styles. Books, magazines, clothes, paintings, music--regardless of country of origin--all resemble one another far more than they differ.
The disease appears first in one or two spurs and spreads in the following seasons to adjacent spurs, eventually killing the arm or cordon. Shoots developing from below the affected arm are healthy the first year but may show symptoms in subsequent seasons. Unless a major portion of the vine's structural framework is involved, the affected shoots eventually may be covered by normal overgrowth from the vine's healthy portion. It is common to find one side of the vine dead, while the other side appears healthy. When the whole vine has been killed or is severely affected by Eutypa dieback, strong suckers often develop from the still healthy root system. Complete collapse and death of vines or arms in summer is uncommon; once shoots have emerged, they usually grow through summer and die the next winter.
An important diagnostic symptom of Eutypa dieback is the formation of pruning wound cankers. These dead areas surrounding large, old pruning wounds often can only be found by removing the rough outer bark. They are frequently located adjacent to the affected spurs. In advanced cases, the wood around an unhealed wound assumes a ridged and flattened appearance so that the trunk or cordon may be twisted and malformed. Older cankers show a marginal zonation, indicating successive annual attempts of the vine to overgrow the necrotic area.
Because E. lata is a wound parasite, infection invariably occurs through pruning wounds. The fungus has a long incubation period in the vine. Several growing seasons may elapse before visible cankers develop around an infected wound or before stunted shoot symptoms appear.
Because the spores are dispersed by rain, the chance for infection can be reduced by pruning the grapevines late in the spring when rains are not as likely to occur. Late pruning is also important in reducing the susceptibility of the wounds. Research has shown that vines pruned early in the winter, when they are dormant, do not heal as quickly and the pruning wounds are susceptible to infection for 4 to 5 weeks. When vines are pruned in late winter-early spring they are beginning to come out of dormancy and the period of wound susceptibility is reduced to 10 to 14 days because they heal faster.
In the long term, wound protection offers much better control prospects than eradication once the disease has become established. Pay special attention to wound protection if drastic retraining or changeover of the variety is contemplated or if the vineyard is located in an area known to have E. lata.
See also Grapevine Trunk Diseases
The International Council of Grapevine Trunk Diseases (ICGTD) is a non-profit organization dedicated to promoting personal contacts, collaboration, and exchange of information among the scientists involved in research on grapevine trunk diseases. The ICGTD maintains a website with information about workshops, membership, references, and scientific information about the trunk disease agents.
HOUSTON, TX (July 31, 2012) – In Holocaust Museum Houston’s newest library exhibit, viewers will encounter the Holocaust through the eyes of three artists who survived and now use their artwork to remember victims of that time.
“People of the Yellow Star,"
by Paul E. Yarden
In “We Will Never Forget: Selected Works from Max Brenner, Miriam Brysk and Paul E. Yarden,” the artists use varying mediums – painting, prints and sculptures – to highlight their experiences and preserve their memories.
The exhibit opens Friday, Aug. 17, 2012 at the Museum’s Laurie and Milton Boniuk Resource Center and Library in the Morgan Family Center, 5401 Caroline St., in Houston’s Museum District and remains on display through Dec. 31, 2012. Viewing hours are 10 a.m. to 5 p.m. Monday through Friday. Admission is free. The public also is invited to a free preview reception from 6 to 8 p.m. on Thursday, Aug. 16. Visit http://www.hmh.org/RegisterEvent.aspx to RSVP online.
“Within the Holocaust, there are millions of independent stories. Everyone went through loss, and many who survived want to give back to their community. This exhibit is important because the survivors were influenced by personal experiences, both good and bad, to create their works. Their art is their way of telling their stories,” said Carol Manley, the Museum’s director of collections and exhibitions.
Brysk was born 1935, in Warsaw, Poland. After the Nazis occupied Warsaw, Brysk’s family fled to Lida (now Belarus), her father’s birthplace. When the Germans attacked the Soviet Union in 1941, the Lida Jews were herded into a ghetto.
The Nazis arranged the mass murder of all Jews in the Lida Ghetto on May 8, 1942. Brysk’s family was placed at the front of the line, but the family was spared because of her father’s skill as a surgeon.
Rumors surfaced that all Jewish children in the ghetto would be killed while their parents were away for work. Brysk was sent to the home of a Christian woman whose daughter had been saved by Brysk’s father. After the rumors were proven false, Brysk’s family was contacted and rescued by Jewish partisans.
Upon reflection of her own childhood, Brysk found the inspiration for her new collection, “Children of the Holocaust.” The works featured in the new exhibit are, “Kupiczow-Kowel,” “Lodz-Chelmno,” “Paris-Auschwitz No. 1” and “Amsterdam-Sobibor.”
Each piece features the name of a specific child, the town in which he or she lived and the likely place where the child died. “I felt moved to consider the plights of those Jewish children who, unlike me, did not survive. I felt a compelling drive to remember through art those children who perished, and to portray the nature of their disrupted lives during the Holocaust,” Brysk said.
Her work uses a series of images collected from the Holocaust digitally printed onto a Tallis, a prayer shawl that Jewish children receive at their bar mitzvah, or coming-of-age celebration. Brysk presents each piece in the manner prescribed by Jewish tradition and offers them as a gift of remembrance to the children who were murdered before they could experience this rite of passage.
As a child, Brenner survived the ghetto and camp in Demblin, Poland. He began to sketch pictures of liberation and survival, and later his talent was recognized at a displaced persons camp in Torino, Italy by Italian engineer Enrico Segre.
“Some of my primitive sketches were of soldiers jumping down in parachutes. My dream was that they would come to free us from the Nazis,” Brenner says of his work.
Segre introduced Brenner to the contemporary Italian artist Paola Levi Montalcini, Segre’s cousin. Montalcini tutored Brenner in her private art studio and prepared him for admission into the Academy of Fine Arts.
He was admitted into the Liceo Artistico of The Academy of Fine Arts in Naples; however, his parents emigrated to the United States, where Brenner’s talent was again recognized in New York, and he was sent to study with other gifted children at the Metropolitan Museum of Art.
Holocaust Museum Houston will feature three of Brenner’s pieces from the series, “Holocaust Paintings.” “Shoes,” recalls bloody memories of his past and an image ingrained in his mind after his liberation from the concentration camp in Czestochowa. “The Survivor,” depicts a Torah that survived the destruction of the Nazis. Finally, “A Mother’s Plight,” is the shaking image of a young Jewish mother in the burning Warsaw ghetto.
Yarden was born in 1927 in a small town south of Slovakia, then Czechoslovakia. The region became part of Hungary, and the persecution of Jews began in 1938. His family’s possessions were confiscated, and they were moved into the ghettos.
Yarden was conscripted from the ghetto into a slave labor brigade by the Hungarian Fascist Army in 1938. He was taken to the Eastern Front in the Carpathian Mountains to build fortifications in inhumane conditions. Finally, he escaped and was liberated by the Soviet army on Dec. 16, 1944.
“In 1945, returning to my hometown I found that nearly all the Jews once living there were murdered, including my whole family. Only one cousin survived, returning from Auschwitz,” says Yarden.
Yarden was attracted to drawing and sculpting. His opportunity to learn came while in Israel. “In Israel, I became a pupil of the famous local sculptor Zeev Ben-Zwi who was involved at that time in creating public sculptural monuments to the Holocaust. Unfortunately, his untimely death in 1951 put an end to these efforts. Ben-Zwi taught me how to cast plaster and carve stone and had a great influence on my artistic development,” says Yarden.
The exhibit will feature three of his pieces, “People of the Yellow Star,” which expresses the helplessness fear and resignation of a Jewish family before deportation; “The Innocents,” a sculpture that depicts women and children standing before executioners; and “Survival,” which shows his son holding his grandson.
This exhibit is presented with special thanks to Avi Tiomkin.
Holocaust Museum Houston is dedicated to educating people about the Holocaust, remembering the 6 million Jews and other innocent victims and honoring the survivors' legacy. Using the lessons of the Holocaust and other genocides, the Museum teaches the dangers of hatred, prejudice and apathy.
Holocaust Museum Houston’s Morgan Family Center is free and open to the public and is located in Houston’s Museum District at 5401 Caroline St., Houston, TX 77004. For more information about the Museum, call 713-942-8000 or visit www.hmh.org.
Divine intervention in the Odyssey.
Extracts from this essay...
Divine intervention is when the gods interfere with the theme, plot or story line in some way. Poseidon: Poseidon intervenes after Odysseus and company visit the Cyclops Polyphemus, Poseidon's son, and blind him. At the end of this part in the story, Odysseus tells Polyphemus his name, and Polyphemus gets Poseidon to take revenge. Poseidon does this by creating a tremendous storm when Odysseus leaves the island of Ogygia, having been released by Calypso, almost killing him; Odysseus finally lands at Scherie, where the Phaeacians live. By doing this, Poseidon interferes with Odysseus' long journey home, prolonging it even more. It happens to Odysseus just after he has been held hostage by Calypso for seven years.
This makes Odysseus seem like a stronger character, being looked out for by a goddess, and makes him seem like an unbeatable hero. The first time she interferes with the plot is in the council meeting of the gods, where she persuades them to make Calypso let Odysseus go. She then puts spirit into Telemachus, Odysseus' son, and advises him, disguised as an old friend of Odysseus', to confront the suitors publicly and then, if they still stay, to go and find out information about the whereabouts of his father. She then intervenes by proposing a route to take to see Nestor and Menelaus, and accompanies him disguised as Mentor. The next time she interferes is when she arranges for Nausicaa to rescue Odysseus when he comes ashore.
All the intervention that Athene does in this book is so that she can help Odysseus as much as she can, because she likes him. Aeolus - wind-god: He intervenes in the story when he gives Odysseus all the winds in a bag so that they won't stop him from reaching Ithaca. His crew members were too curious, though, because they thought that it might be treasure or something, so they opened the bag and let all the bad winds out, and they were blown back to Aeolus' island. Zeus: As well as intervening at the end of the book, he also intervened to take revenge for the sun-god. Odysseus' crew, who had been specifically told not to kill the sun-god's cattle, killed them anyway, and Zeus avenged this by sending them a storm, destroying the last ship and all of Odysseus' crew, leaving just Odysseus, who was swept away to Ogygia.
History of Medicine
Click on the artwork above for a higher resolution image.
(loading time is long for slow connections)
De Humani Corporis Fabrica...
Basel, 1543. Woodcut. National Library of Medicine.
Stephen van Calcar and the Workshop of Titian
The only known first-hand likeness of Vesalius shows the formally attired anatomist grasping the dissected arm of a cadaver. Vesalius’s head is disproportionately large compared to his body, but the cadaver is larger still — leading to speculation that the engraver pulled the composition together from several sources.
To see more images from this book, visit Historical Anatomies on the Web
Your phone can determine your geographical position using GPS (Global Positioning System). The information about your location can be used by a number of applications on your phone such as navigation, the search function or weather forecast.
1. Find "Location"
2. Turn use of GPS position for apps on or off
If you turn on the function: proceed to step 3.
If you turn off the function: proceed to step 4.
3. Select option
Turn on satellite-based and network-based GPS, go to 3a.
Turn on network-based GPS, go to 3b.
Turn on satellite-based GPS, go to 3c.
An Australian research team recently announced that they have found an effective way to kill the destructive crown-of-thorns starfish, which is devastating coral reefs across the Pacific and Indian oceans.
The discovery by James Cook University’s Centre of Excellence for Coral Reef Studies in Queensland comes after a study showed the Great Barrier Reef had lost more than half its coral cover in the past 27 years.
Outbreaks of the large, poisonous and spiny starfish, which feast on coral polyps, was linked to 42% of the destruction.
Researchers said they have developed a culture that infects the starfish with bacteria and can destroy them in as little as 24 hours.
The bacteria also spreads to other starfish that come near or into contact with an infected individual.
The next step will be tests to see if it is safe for other marine life, particularly fish.
“In developing a biological control you have to be very careful to target only the species you are aiming at, and be certain that it can cause no harm to other species or to the wider environment,” said Morgan Pratchett, a professor at the centre.
“This compound looks very promising from that standpoint – though there is a lot of tank testing still to do before we would ever consider trialing it in the sea.”
Outbreaks around tourist sites in Australia are currently controlled using a poison injection delivered by a diver to each starfish.
If the new culture is found to be safe, it would only need a single jab into one starfish, enabling a diver to kill as many as 500 of the creatures in a single dive.
The Great Barrier Reef (Photo: AP)
Another scientist from the centre, Jairo Rivera Posada, said that over the past 50 years the starfish had caused more damage to reefs than bleaching.
“There were massive outbreaks in many countries in the 1960s and 1980s – and a new one is well under way on the Great Barrier Reef,” he said, highlighting the urgency of tackling the threat.
“In the current outbreak in the Philippines they removed as many as 87,000 starfish from a single beach,” he added. “This gives you an idea of the numbers we have to deal with.”
Posada said other fresh crown-of-thorns outbreaks have been reported from Guam, French Polynesia, Papua New Guinea and the central Indian Ocean.
Research released last week by the Australian Institute of Marine Sciences warned that coral cover on the heritage-listed Great Barrier Reef – the world’s largest – could halve again by 2022 if trends continued.
As well as starfish, intense tropical cyclones and two severe coral bleaching events had been responsible for the damage.
The study pinpointed improving water quality as key to controlling starfish outbreaks, with increased agricultural run-off such as fertilizer along the reef coast causing algal blooms that starfish larvae feed on.
The Centre of Excellence for Coral Reef Studies scientists agreed: “Any attempts to control these outbreaks will be futile without also addressing the root cause of outbreaks, including loss of starfish predators as well as increased nutrients that provide food for larval starfishes,” it said in a statement.
Last week, the Australian government admitted the Great Barrier Reef had been neglected for decades, but said it had contributed hundreds of millions of dollars to address the issues over the past five years.
Can disturbed sleep patterns have an impact on a child’s ability to acquire language and vocabulary?
Child refugees talk about their experience of transitioning into a new high school in Australia.
Saving students through the medium of comics.
Schools need to have a formal policy in place for how to deal with heatwaves effectively and keep children cool and well.
Focusing on the opportunity to learn gap removes the emphasis from locating "the problem" in the person, and turns our attention to the differences in access to educational resources.
17% of the Australian population is now of various Asian backgrounds. School curriculum around Asia-Australia relations needs updating to reflect demographic changes.
Our enemy is complacency – blaming the post-codes, fixing the students not the system, and arguing for more resources to continue what is not working.
Many young children can give the false impression that they are learning to read, when in fact they are mostly guessing words from pictures or context. This test will help to identify these students.
The gap between boys and girls starts early and grows quickly.
There are lots of ways teachers greatly influence children’s outcomes, including improving motivation and resilience.
Failing to provide an appropriate education for students who are gifted increases the risk of mental health issues, boredom, frustration, and behavioural problems.
A prescriptive English curriculum is in danger of making writing boring for primary school children.
We are now seeing technology being designed with education in mind, and it's changing the way students learn and understand.
Our schooling system needs a rethink.
Forget about the expensive gifts, for young children, it's all about the paper and the packaging this Christmas.
Educational genomics could mean tailor-made curriculum programmes can be created based on a pupil’s DNA profile.
While students enjoy learning with robots, research finds that teachers are more sceptical – worrying about their job security and technical capabilities of robots.
We need to educate children about how to behave responsibly online.
An intense night of study won't help you remember information in the long-term – and the stress of revising under pressure will likely impact on your sleep and thus your exam performance.
Do we really need to introduce a well-being league table to tackle mental health issues in schools?
Issue No. 03 - May/June (1999 vol. 25)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/32.798322
Abstract—In this paper, we describe Teapot, a domain-specific language for writing cache coherence protocols. Cache coherence is of concern when parallel and distributed systems make local replicas of shared data to improve scalability and performance. In both distributed shared memory systems and distributed file systems, a coherence protocol maintains agreement among the replicated copies as the underlying data are modified by programs running on the system. Cache coherence protocols are notoriously difficult to implement, debug, and maintain. Moreover, protocols are not off-the-shelf, reusable components, because their details depend on the requirements of the system under consideration. The complexity of engineering coherence protocols can discourage users from experimenting with new, potentially more efficient protocols. We have designed and implemented Teapot, a domain-specific language that attempts to address this complexity. Teapot's language constructs, such as a state-centric control structure and continuations, are better suited to expressing protocol code than those of a typical systems programming language. Teapot also facilitates automatic verification of protocols, so hard-to-find protocol bugs, such as deadlocks, can be detected and fixed before encountering them on an actual execution. We describe the design rationale of Teapot, present an empirical evaluation of the language using two case studies, and relate the lessons that we learned in building a domain-specific language for systems programming.
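Teapot's own syntax does not appear in the abstract, but the state-centric style it describes can be loosely illustrated in Python. The sketch below is a toy MSI protocol written as a transition table; the state names, events, and actions are invented for illustration and are not Teapot code.

```python
# Toy state-centric cache coherence handler (illustrative only; not Teapot).
# MSI protocol states for a single cache line: Invalid, Shared, Modified.

TRANSITIONS = {
    # (current_state, event) -> (next_state, action)
    ("Invalid",  "local_read"):   ("Shared",   "fetch_from_memory"),
    ("Invalid",  "local_write"):  ("Modified", "fetch_exclusive"),
    ("Shared",   "local_write"):  ("Modified", "upgrade_to_exclusive"),
    ("Shared",   "remote_write"): ("Invalid",  "invalidate"),
    ("Modified", "remote_read"):  ("Shared",   "write_back"),
    ("Modified", "remote_write"): ("Invalid",  "write_back_and_invalidate"),
}

def step(state, event):
    """Return (next_state, action); unlisted events leave the state unchanged."""
    return TRANSITIONS.get((state, event), (state, "no_op"))

print(step("Invalid", "local_read"))  # ('Shared', 'fetch_from_memory')
```

Because every (state, event) pair is enumerated in one table, this style makes exhaustive model checking of the protocol — the automatic verification the abstract mentions — much more tractable than protocol logic scattered through a systems program.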
Domain-specific languages, distributed systems, cache coherence, continuations, verification.
Satish Chandra, James R. Larus, Bradley Richards, "Teapot: A Domain-Specific Language for Writing Cache Coherence Protocols", IEEE Transactions on Software Engineering, vol. 25, no. 3, pp. 317-333, May/June 1999, doi:10.1109/32.798322
Content on the Internet’s world-wide web continues to grow at a dizzying pace. While a wealth of engaging content for learning is now available online, locating and later RE-locating websites for instructional uses is frequently challenging. As teachers, we’ve all likely had an experience similar to this one: “I know I saw a great website about that topic just last week. How frustrating that I cannot find the web address again to share it with my students!” Social bookmarking offers a free and compelling way to address the need we all have to locate, record, later RE-locate, and share “good website finds” on the Internet. Regardless of future changes in the Internet and the content it contains, the ability to ably manage website “bookmarks” or “favorites” is likely to remain an enduring skill for teachers and students alike.
DNS = Delayed Neurological Sequelae of Carbon Monoxide Poisoning
DNS is an abbreviation for Delayed Neurological Sequelae, a phenomenon which occurs in nearly 40% of the survivors of carbon monoxide poisoning, yet is rarely discussed or identified with those who are treated and released from the emergency room after a carbon monoxide poisoning. This is true even with those who have carboxyhemoglobin (COHb) levels above 25%. The risk factor for the onset of DNS is similar for those who have levels at or above 10%.
DNS is to some degree a misnomer, because the symptoms are not necessarily delayed in onset; rather, they may continue to worsen for up to 40 days after discharge. Whereas concussion survivors are at risk for a worsening of symptoms of brain damage for 72 hours post injury, in carbon monoxide poisoning the risk of worsening symptoms lasts at least six weeks.
The classic case of DNS is where the person has seemingly had a complete recovery and then days or weeks later has a severe relapse. Even before neurological science began to appreciate the magnitude of the symptoms of brain damage in other conditions, the existence of DNS was well appreciated. From WHO (World Health Organization) 2004 guidelines for carbon monoxide poisoning:
Perhaps the most insidious effect of carbon monoxide poisoning is the delayed development of neuropsychiatric impairment within 1–3 weeks and the neurobehavioural consequences, especially in children.
When explaining DNS to clients or a jury, I always try to split out the multiple pathologies that cause brain damage and other organ damage after carbon monoxide poisoning. First, there can be death or damage to cells from lack of oxygen. Carbon monoxide takes the place of oxygen in the blood, meaning that when the blood circulates to cells, they may be starved of oxygen. Thus, cell death through asphyxiation may occur. Yet, except in cases of death where the heart stops beating, this is usually not the most potent pathology.
Immunological Response to Carbon Monoxide Poisoning Biggest Concern
The greatest amount of disability from carbon monoxide comes not from the hypoxic period, but from the body’s own immune response to fight off the presence of the poison. The body sees carbon monoxide as a deadly poison and reacts as it might to any other toxin, setting off a chain of events. This chain of events continues even after the threat to the cell from asphyxiation is gone. This is the primary culprit in DNS.
A virtual tour is a simulation of an existing location, generally composed of a series of still or video images. A virtual tour may also use multimedia elements like music, sound effects, text and narration. The term is used broadly to describe a range of photographic and video-based media.
In general, a panorama indicates an unbroken view; a panorama can be either panning video footage or a series of photographs. However, the phrases “virtual tour” and “panoramic tour” are mostly associated with virtual tours created from still cameras. Such virtual tours are made up of several shots taken from a single vantage point, with the camera and lens rotated around the nodal point — the point within the lens where the light paths cross.
Video-Based Virtual Tours
With the increasing use of internet among people, video based virtual tours have also gained a lot of popularity. Video cameras are generally used to offer walk-through subject properties. This method has a major advantage as the point of view constantly changes throughout the pan. However, technical skills and equipment are essentially required for capturing high quality videos. Often, different software products are used for creating media rich virtual tours.
Applications of virtual tours
Virtual tours also serve people who are stuck at the office and unable to get away to enjoy a vacation. In such a situation, individuals can take a few minutes out of their work to browse webcam links, travelogues and sites that offer virtual tours. Virtual tours are used extensively in the real estate industry and in universities, and can allow users to explore an environment entirely online.
Different industries use the technology to help market their products and services. In the past few years, the usability, accessibility and quality of virtual tours have also increased considerably. Virtual tours are excellent for older people as they do not have to travel from one place to another.
Impeached traces the explosive impeachment trial of President Andrew Johnson to its roots in the social and political revolutions that rocked the South with the end of slavery and of the Civil War. As president after Lincoln’s assassination, Johnson, a Tennessee Democrat, not only failed to heal the nation’s wounds but rather rubbed them raw, ignoring widespread violence against the freed slaves and encouraging former rebels to resume political control of the Southern states. His high-handed actions were opposed by the equally angry and aggressive Congress, led by Rep. Thaddeus Stevens of Pennsylvania, an ardent foe of slavery who aimed to rebuild American society on principles of equality and fairness.
The titanic collision between Congress and the president was diverted, through the constitutional impeachment process, into a legalistic dispute over whether Johnson could fire his own Secretary of War. Inept lawyering by Johnson’s prosecutors, combined with political deals, saved Johnson by a single vote.
Impeached challenges the traditional version of this pivotal moment in American history, which portrays Johnson as pursuing Lincoln’s legacy by showing leniency to the former rebels. Impeached shows the compelling reasons to remove this unfortunate president from office, reveals the corrupt bargains that saved Johnson’s job by a single senator’s vote, and credits Johnson’s prosecutors with seeking to remake the nation to accord with the ideals that Lincoln championed and that the Civil War was fought for.
Solar Energy in New Mexico
PRC Commissioner Jason Marks
A few months after the enormous BP oil spill in the Gulf of Mexico last year, the head of the solar energy industry had his own announcement to make: “There was a major spill of solar energy yesterday,” he solemnly told attendees at an energy conference, before concluding, “Everyone agreed it was a beautiful day.” This bon mot plays on the widespread (and accurate) perception that solar energy is a safe and clean source of power. But is solar energy a serious source of power that can displace fossil fuels on a large scale and at a reasonable cost? This article discusses how we can use solar power and other renewable energy technologies to reach the dramatic reductions in greenhouse gas emissions needed to avoid catastrophic climatic disruption.
New Mexico is blessed with substantial raw natural resources of wind and solar power. During the past decade, renewable energy development focused on wind energy, both in our state and nationally. Between 2000 and 2010, the amount of installed wind generation capacity in the U.S. increased from around 2,000 megawatts (MW) to over 40,000 MW. New Mexico saw 700 MW of wind capacity installed during this period, enough to supply the power needs of more than 200,000 homes, assuming optimal wind conditions. Wind power’s leading role in the renewable world is largely due to its relatively low costs, comparable at times to the costs of fossil fuel (natural gas) generated electricity. So, the more we can do with wind, the lower the impact to our pocketbooks when we substitute a portfolio of renewable energy for fossil fuels.
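A quick sanity check on the figures quoted above — this arithmetic is my own illustration, not part of the original article:

```python
# Implied per-home supply behind the "700 MW serves 200,000 homes" claim.
wind_capacity_mw = 700      # installed NM wind capacity, 2000-2010
homes_served = 200_000      # homes supplied at optimal wind conditions

kw_per_home = wind_capacity_mw * 1_000 / homes_served
print(kw_per_home)  # 3.5 kW per home — only when the turbines run at full output
```

The "assuming optimal wind conditions" caveat matters: real wind farms produce a fraction of nameplate capacity on average, which is exactly the intermittency problem the next paragraph takes up.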
But wind energy suffers from being an intermittent source whose availability often does not coincide with the times when we use the most electricity. For example, production of electricity from PNM’s wind farm near Santa Rosa generally peaks during the nighttime hours in winter and spring months, while coming in close to zero on hot summer afternoons when electricity demand is highest. The good news is that when engineers from General Electric, working with the National Renewable Energy Laboratory in Boulder, Colo., modeled hour-by-hour wind availability across the western U.S., electricity demand and existing generation and transmission lines, they concluded that we could fairly easily rely on wind power for up to 30% of our annual average electricity needs. This is a very significant target—and something we need to work towards—but it leaves us short of where we need to be in order to reduce our fossil fuel emissions to safe levels.
That’s where solar comes in. The technical potential of the solar resource in New Mexico alone is enormous: enough to produce more electricity than the entire country uses. Solar energy naturally coincides to an extent with our patterns of electric demand, peaking at mid-day and during the summer months. It can be cost-effectively matched to energy storage (more on this below). And, for the desert Southwest states, solar is a “load-side” resource that can be developed close to population and industrial centers, without the need for long, expensive transmission lines. In fact, solar is so load-side, it can be deployed in the midst of our cities, on buildings, parking structures, and so on. Despite these advantages, solar energy development lagged due to high costs and lack of sufficient investment on the part of our utilities. In 2007, Ben R. Lujan and I enacted PRC rules requiring New Mexico utilities to deploy utility-scale solar projects and to support customer-sited solar-distributed generation. Back then, we had about 200 kilowatts (kW) of solar electricity in the state, powering about 100 homes. Today, we are seeing the results of our solar rules. Over 2,000 homeowners and businesses participate in solar “REC” (renewable energy certificate) incentive programs mandated by the PRC, producing their own clean energy in the Albuquerque area, Las Cruces, Santa Fe, and even eastern N.M. This adds up to over 17 MW of customer-sited distributed solar generation. On the utility side, we have 150 MW of solar generation in operation, under construction, or contracted for construction during 2012. (No Solyndras here, these projects are all “nailed down.”) Together, this is enough solar electricity to fully supply 55,000 homes.
Recent solar development in NM has focused on photovoltaic (PV) technology, the now-familiar panels that convert photons, carrying solar energy directly into electricity without any moving parts. The less well-known form of solar technology is solar thermal (also known as concentrating solar power or CSP). Most solar thermal power plants use mirrors to concentrate the sun’s heat and then use the heat to make steam that can turn a power-generating turbine. This technology was pioneered at Sandia Labs in the 1970s, leading to commercial-scale demonstrations in California’s Mojave Desert in the 1980s that operate with outstanding reliability to this day. Over the past decade, the hub of solar thermal activity has been in southern Spain, with one of the main goals being the development of energy storage that can turn solar energy into a dispatchable (it’s there when you need it) source of electricity.
The newly constructed Gemasolar plant near Seville stores solar heat it collects from 2,650 mirrors focused on a central power tower in a large tank containing tons of molten potassium salts. Operators then extract the heat as needed to make steam and electricity. With 15 hours of storage capability, Gemasolar produces a constant output of 19 MW of electricity from its steam turbine 24 hours a day during summer months. During winter months, the operators plan to match output to the shape of electric demand; for example, shutting down in the dead of night and spooling up in the early morning hours.
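The Gemasolar figures quoted above can be checked with back-of-the-envelope arithmetic (my own worked example, not from the article):

```python
# Energy held in Gemasolar's molten-salt tank, from the figures above.
turbine_output_mw = 19   # constant electrical output of the steam turbine
storage_hours = 15       # hours of full-load storage in the salt tank

stored_energy_mwh = turbine_output_mw * storage_hours
print(stored_energy_mwh)  # 285 MWh — enough to keep the turbine running all night
```

This is what "dispatchable" means in practice: the tank holds roughly a full night of turbine output, so the plant can generate around the clock in summer.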
Solar thermal power, with its storage capabilities, is an essential element of our future clean energy portfolio. It’s hard for me to see how to make renewable energy our majority power source, truly displacing today’s fossil-fuel electric fleet, without developing large amounts of solar thermal storage. Storing large amounts of energy from wind or solar PV does not appear to be economically feasible with current technologies (although there are significant opportunities to use batteries onboard or even retired from electric vehicles for grid storage and balancing). Biomass and geothermal power plants are capable of producing renewable-sourced electricity around the clock, but these sources are not available in sufficient quantities to supply a major portion of our energy demands.
Construction is moving forward on three large solar thermal plants in the U.S., including the 280 MW Solana project in Arizona, designed with six hours of storage to serve the summer afternoon/early evening demand peak. But many other planned solar thermal projects have stalled or been cancelled, including a 90 MW plant that was to be built near Las Cruces to serve NM customers of El Paso Electric Co.
Ironically, one of the main factors in the cancellation of solar thermal projects has been the market success of solar PV. Prices for large-scale PV systems have dropped by more than half over the past five years. Factoring in the effect of solar tax credits, costs to the power purchaser are down even more. PV went from being a more expensive solar option for utilities to being the lower-cost and more flexible option. Current bids for utility-scale PV projects are in the range of 10 to 12 cents per kilowatt-hour, versus 14 to 17 cents for solar thermal. PV is also easier to site and finance, and can be cost-effectively deployed at moderate scale; e.g., 5-20 MW, if desired. (Several of the cancelled solar thermal projects were reengineered as PV projects, including the one planned for southern N.M., which turned into a 20 MW PV project that recently went online.)
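Comparing the midpoints of the bid ranges quoted above shows roughly how large the solar thermal cost premium was at the time (an illustrative calculation, not from the article):

```python
# Midpoint comparison of the utility-scale bid ranges quoted above ($/kWh).
pv_bids  = (0.10, 0.12)   # solar photovoltaic
csp_bids = (0.14, 0.17)   # solar thermal (concentrating solar power)

pv_mid  = sum(pv_bids) / 2    # 0.11
csp_mid = sum(csp_bids) / 2   # 0.155

premium = (csp_mid - pv_mid) / pv_mid
print(f"solar thermal premium: {premium:.0%}")  # solar thermal premium: 41%
```

A roughly 40% premium per kilowatt-hour explains why developers reengineered stalled solar thermal projects as PV, even though PV lacks thermal storage.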
The dramatic price drops for solar PV, along with PRC-directed incentive programs, have made it possible for over 2,000 New Mexico households, businesses, schools, and other electric customers to invest in their own distributed solar systems. PV prices will continue to decline as technological advances continue to come online and the markets continue to evolve. When I first heard about “grid-parity,” the levelized cost of a PV system for an electric customer reaching the cost of buying power from the utility, I was skeptical. But it’s a reality today in places with unusually high electricity prices like Hawaii, and within reach elsewhere for larger commercial customers, possibly as soon as five years or so.
In addition to moving us down the road toward a more environmentally sustainable energy supply, PRC policies supporting customer-sited distributed solar generation have also led to the development of a vibrant solar installation industry in the state, with many good-paying jobs. Competition between these companies, in turn, furthers our goals of continuing to bring down prices.
Individuals who follow the solar industry in NM know that since last year, the PRC has implemented mechanisms to reduce the per-kwh REC incentive rates for new customer-sited solar systems. (An important principle I have worked to sustain is that the economics should be locked in for existing systems once the owner has signed an REC contract with the utility.) Declining incentive rates are in recognition of the declining prices for new systems, and allow the Commission to incentivize more systems and more kilowatts for the same amount of money.
It’s also important to remember that the money for REC incentives, as well as for utility-sponsored renewable projects, comes out of electric rates charged to all customers. Thus far, we’ve been able to significantly advance solar in the state with an extremely modest impact on rates, which will begin showing up on bills in the next year or so at around 2%. Because we have been careful in calibrating solar REC incentives and other renewable-energy program expenditures, we’ve been able to support an aggressive rate of growth without creating a superheated, unsustainable situation. Over the next few years, we will still have headroom in terms of what New Mexicans are willing to invest to get a cleaner energy supply. We need to continue to support the dual tracks of customer-sited distributed generation and utility-scale renewable energy projects, including some new utility wind energy projects and some biomass and geothermal.
From time to time, I run into people who believe the future should be entirely rooftop-distributed solar — clean, locally controlled energy at lower costs because we’ve cut out the utility company. While I understand the ideals behind this vision, it’s frankly not realistic. To begin with, we are not going to convince Americans to change their lifestyles so that we do all our TV watching, data processing and websurfing, washing, heating, cooling and every other energy-intensive activity between the hours of 9 a.m. and 4 p.m., and then only on sunny days! Customer-sited PV is a seamless solution because it is tied to the grid, with power able to flow both ways, depending on production and consumption. Secondly, while the costs of using renewable energy and other strategies to reduce greenhouse gas emissions to safe levels are not anywhere near as high as the partisan opponents of clean energy claim, we need to be honest that cleaning things up will increase our energy bills. In order to keep the cost impacts manageable, we need the economies of scale that utility-sized renewable energy projects bring. We also need utility-based projects in order to ensure that everyone is using clean energy, not just those who are willing and able to make their own investments. And, as discussed above, the solar thermal technologies with storage that we will most likely need if we want to take renewable energy from a side dish to our main energy course are only practical at a large scale.
Of course, I could be wrong on this. Our long-term clean-energy solution could look different from what I envision. For the past six years, as a PRC commissioner, I’ve tried to chart a course to foster the technologies and industries that we are likely to need to transition away from fossil fuel dependency. My colleagues in many state legislatures, governors’ offices and utility commissions across the country have tried to do the same, with renewable portfolio standards and various technology preferences, like NM’s solar set-asides. These policies have successfully given us tens of thousands of megawatts of clean energy generation, advanced the technologies and reduced the costs and created green industries and green jobs.
But we need to move faster and in different ways in order to have a hope of mitigating the impending greenhouse gas-caused climate catastrophe. I’m convinced it’s time to put a price on greenhouse gas emissions and allow markets to find the best mix of solutions. This has to be done at an economy-wide level. My personal belief is that we’re better off with a simple tax-and-rebate approach, rather than a complex “cap and trade” system, with numerous loopholes and opportunities for financial traders to turn emissions trading into another Wall Street casino.
Because our problems are serious and urgent, we need to make the constraints on carbon dioxide and other greenhouse gas emissions stringent enough to dramatically accelerate the deployment of GHG-free energy sources like solar, while forcing our nation’s fleet of conventional coal plants into retirement. I am confident that once the U.S. begins to capture the real costs of dirty fossil-fuel energy production, our abundant New Mexico solar and wind resources will be perfectly positioned to play a leading role in an environmentally sustainable and economically prosperous future.
Jason Marks is in his second term on the New Mexico Public Regulation Commission, where he has played a leading role in the implementation of the state’s renewable energy laws, as well as protecting consumers from millions of dollars in unjustified utility and telecom rate increases. Marks has a bachelor’s degree from Reed College and a law degree from the University of New Mexico.
About the author
The Green Fire Times is published by Skip Whitson, edited by Seth Roffman with design by Anna Hansen, webmaster Karen Shepherd and Breaking News editor Stephen Klinger. All authors retain all copyrights. If you need to contact a particular author, or want to write for us, please be in touch.
This entry was posted by Green Fire Times on November 4, 2011 at 12:53 am, and is filed under November 2011. | <urn:uuid:80bf1d7a-ff98-44d4-bbec-d6180eb49c28> | {
"date": "2016-08-30T05:10:29",
"dump": "CC-MAIN-2016-36",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982969890.75/warc/CC-MAIN-20160823200929-00024-ip-10-153-172-175.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9451517462730408,
"score": 2.78125,
"token_count": 3053,
"url": "http://greenfiretimes.com/2011/11/solar-energy-in-new-mexico/"
} |
Robson, S; James, MR; (2007) Photogrammetric image sequence processing to determine change in active lava flows. In: Proceedings of the 2007 Annual Remote Sensing and Photogrammetry Society (RSPSOC2007). Remote Sensing and Photogrammetry Society: Newcastle.
Full text not available from this repository.
Understanding the processes involved with the advance of lava flows is critical for improving hazard assessments at many volcanoes. Here, we describe the application of computer vision and oblique photogrammetric techniques to visible and thermal images of active lavas in order to investigate flow processes at Mount Etna, Sicily and on pahoehoe flows in Hawaii. Photogrammetric surveys were carried out to produce repeated topographic datasets for calculation of volumetric lava flux at the flow-fronts. Photogrammetry is an established technique for the investigation of change in landform over time, relying typically on vertical aerial imagery or more unusually, on oblique imagery from aircraft or terrestrial platforms. This paper describes experiences in processing data from terrestrial digital photogrammetric surveys of lava flows acquired with digital photogrammetric SLR cameras at a number of sites which have active lava flows. In each case the objective was to ascertain flow evolution over time using a sequence of oblique imagery, captured from multiple locations. Data processing was carried out using VMS software to solve the imaging geometry and to deliver seed points for stereo and multi-photo matching. Stereo matching was carried out using UCL’s gotcha matching package which incorporates a combination of pyramidal and Otto-Chau region growing algorithms to produce topographic models which were then automatically image draped for visualisation within VMS software. For the Hawaii data, sequences of image pairs in which the lava field advanced over time were evaluated using the pyramidal features of the gotcha matcher in order to allow the propagation of seed points from one temporal image pair to the next. Because of the four-level pyramid chosen, this process could utilise sub-sampling of the 6MP resolution of the input images in order to account for camera vibrations caused by the environment.
At each temporal epoch the orientation of each image was refined by tracking features in stable terrain. Example results will be shown demonstrating tracking through a 36 minute sequence. Such results allowed the computation of volume flux which could then be used to validate flux measurements calculated using cooling trends from thermal images.
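The volumetric flux calculation described above can be illustrated with a toy example. The sketch below is my own illustration, not the authors' code: it differences two gridded topographic models (DEMs) of the same flow area and converts the elevation change into a mean volume flux. The function names and the uniform square-grid assumption are mine.

```python
import numpy as np

def volume_change(dem_before, dem_after, cell_size):
    """Volume gained between two DEMs of the same area, in m^3.

    Each DEM is a 2-D array of elevations (m); cell_size is the
    ground footprint of one cell edge (m), assumed uniform.
    """
    dz = np.asarray(dem_after, dtype=float) - np.asarray(dem_before, dtype=float)
    return float(dz.sum() * cell_size ** 2)

def volume_flux(dem_before, dem_after, cell_size, dt_seconds):
    """Mean volumetric flux (m^3/s) over the interval between surveys."""
    return volume_change(dem_before, dem_after, cell_size) / dt_seconds

# Toy 3x3 flow front: the lava thickens by 0.5 m everywhere over 36 minutes
# (the length of the image sequence mentioned in the abstract).
before = np.zeros((3, 3))
after = before + 0.5
flux = volume_flux(before, after, cell_size=2.0, dt_seconds=36 * 60)
print(flux)
```

In practice the DEMs would come from the stereo-matched topographic models rather than hand-built arrays, and cells outside the flow would be masked out first.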
|Title:||Photogrammetric image sequence processing to determine change in active lava flows|
|Event:||Remote Sensing and Photogrammetric Society, Annual Conference, Newcastle, September 2007|
|UCL classification:||UCL > School of BEAMS > Faculty of Engineering Science > Civil, Environmental and Geomatic Engineering|
| <urn:uuid:18011216-3066-4272-a6f8-8188413157df> | {
"date": "2015-02-01T16:41:47",
"dump": "CC-MAIN-2015-06",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121744242.57/warc/CC-MAIN-20150124174904-00020-ip-10-180-212-252.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8777053356170654,
"score": 2.65625,
"token_count": 571,
"url": "http://eprints.ucl.ac.uk/38069/"
} |
A Winmodem, like other modems, is used for accessing phone services such as BBS, Internet, voice phone, fax, etc. It is connected to a phone line and is characterized by its speed. If you want to learn more about modems, I refer you to the Modems-HOWTO.
But they are WINmodems. That is, they need Windows to work. Why? Simply because they are stupid. They need special software, a driver, to accomplish their complete task. Software implies an OS, and the drivers included with the modem are, 99% of the time, exclusively for the MS-Windows platform. But with the democratization of Linux, some manufacturers, like LT or Motorola, decided to create a Linux driver for their modems. However, they have not understood the Linux philosophy: the drivers they provide work, of course, but they are 'Closed Source'. They are free, in the sense of price, but not under the GPL. This means that the sources are not available.
So, some 'hackers' decided to write an Open Source driver, but they do not know a lot about these modems, because the manufacturers do not want to release their modems' specifications, so the Open Source drivers are often in alpha or beta status.
1. Try to get the name of the serial port where your modem is connected (under Windows or MS-DOS; for example: COM1, COM2, ...).
The name of your modem under Linux is /dev/ttySx, where x is the DOS serial port number minus 1.
Example: under DOS: COM1, under Linux: ttyS0 (x = 1 - 1 = 0)
Second example: under DOS: COM3, under Linux: ttyS2
and so on.
2. Make a symlink from /dev/ttySx to /dev/modem, by typing
rm -f /dev/modem
ln -s /dev/ttySx /dev/modem
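The two commands above can also be expressed as a small script. The sketch below is an illustration, not part of the original HOWTO; the `com_to_ttys` and `link_modem` helper names are my own, and touching /dev requires root.

```python
import os

def com_to_ttys(com_number):
    """Map a DOS COM port number to its Linux serial device name.

    COM1 -> ttyS0, COM2 -> ttyS1, COM3 -> ttyS2, ...
    """
    if com_number < 1:
        raise ValueError("COM port numbers start at 1")
    return "ttyS%d" % (com_number - 1)

def link_modem(com_number, dev_dir="/dev"):
    """Recreate the /dev/modem symlink pointing at the right ttySx device."""
    target = os.path.join(dev_dir, com_to_ttys(com_number))
    link = os.path.join(dev_dir, "modem")
    if os.path.islink(link) or os.path.exists(link):
        os.remove(link)          # same effect as 'rm -f /dev/modem'
    os.symlink(target, link)     # same effect as 'ln -s /dev/ttySx /dev/modem'
    return target

if __name__ == "__main__":
    # A modem on COM1 corresponds to /dev/ttyS0.
    print(com_to_ttys(1))
```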
3. Download and install the minicom package. Then run 'minicom -s'.
Choose 'Serial Port Setup', type 'A' to set 'Serial Device', delete the whole line, and type '/dev/modem'. Then confirm with [Enter]. Press [Esc], choose 'Save setup as dfl', then choose 'Exit'.
Wait a little while, then type 'AT'. If the modem answers 'OK', then you do NOT have a Winmodem; you have a standard modem...
If the initialisation takes too long, then you have a Winmodem. Use this document to try to make it useful. Log in as root.
4. Exit from Minicom by typing CTRL+A, then X. | <urn:uuid:3b5f62c1-a1bb-417f-a2df-aa23b4c538d1> | {
"date": "2015-07-02T00:52:11",
"dump": "CC-MAIN-2015-27",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095346.56/warc/CC-MAIN-20150627031815-00306-ip-10-179-60-89.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9079198837280273,
"score": 2.59375,
"token_count": 594,
"url": "http://www.linuxdoc.org/HOWTO/Winmodems-and-Linux-HOWTO-1.html"
} |
Church Slavonic makes extensive use of diacritic marks. These markings indicate pronunciation and grammatical information about words.
Stress marks indicate which syllable should receive slight vocal emphasis. Stressing the proper syllable in Church Slavonic is very important.
For the beginner, hearing and vocalizing the stressed syllable can be difficult. Listen closely to the audio versions of prayers on this site and in the E-Tutor. Develop the habit of proper stressing from the outset, even though it will seem very awkward and forced at first.
Service books make extensive use of text abbreviations called "titlos". Abbreviations must be memorized. Abbreviated words are typically those of significant persons (the Theotokos, Our Lord, etc.) or concepts (heaven, pray, holy, etc.).
Abbreviation marks take several forms. A printable list of abbreviations is also available in GIF and PDF formats.
All online prayer texts on this site have no abbreviations, to make it easier for the beginner to learn the prayers.
Voicing (breathing) Symbol
Voicing signs are a holdover from Greek and serve only as decorative elements.
The Lord's Prayer With Abbreviations
Lord's Prayer With No Abbreviations | <urn:uuid:44139eb0-06d6-497c-b997-425d46372875> | {
"date": "2015-04-25T23:14:18",
"dump": "CC-MAIN-2015-18",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246651873.94/warc/CC-MAIN-20150417045731-00148-ip-10-235-10-82.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8710913062095642,
"score": 2.515625,
"token_count": 266,
"url": "http://orthodoxepubsoc.org/slavonichelp.htm"
} |
Submitted by Taps Coogan on the 7th of April 2018 to The Sounding Line.
As part of our ongoing series of historical video-maps, we present: ‘Visualizing Empires Decline’ from Pedro M. Cruz. Unlike most of the video-maps we feature here at The Sounding Line, this is not a map per se, but shows bubbles whose size represents the total land extent of the four major European empires: Britain, France, Spain, and Portugal. As each empire gains or loses territory, the bubbles absorb or eject chunks of land and change size.
The pace at which empires collapse can be startling and is perhaps best exemplified by France. The French Empire slowly grew over hundreds of years and maintained the bulk of its territorial extent into the 1950s only to implode seemingly overnight in 1960 during the Algerian War.
To see other interesting historical maps check out:
The History of the Greeks
Every Year of the Roman Empire
Every Other Day of the Napoleonic Wars
Every Day of World War I
Every Day of World War II
The History of the World Every Year
The Five Largest Cities Throughout History
Every Major Plague Epidemic in History
The Evolution of Modern Government
The Rise of Religions Throughout History
The History of Communism Since 1850
The History of Urbanization
The History of South America
How the World Got Obese
The History of North America
The History of China Every Year
Every Nuclear Explosion in History
Every Year in The History of the Ottoman Empire
The History of the Middle East
P.S. If you would like to be updated via email when we post a new article, please click here. It’s free and we won’t send any promotional materials. | <urn:uuid:4c921ff4-2ba1-4461-87f8-7f83b6f1593e> | {
"date": "2019-09-23T13:56:55",
"dump": "CC-MAIN-2019-39",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576965.71/warc/CC-MAIN-20190923125729-20190923151729-00296.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.8484992980957031,
"score": 3.5625,
"token_count": 366,
"url": "https://thesoundingline.com/map-of-the-day-visualizing-empire-decline/"
} |
Cytomegalovirus (CMV) is a member of a group of herpes-type viruses that can cause disease in different parts of the body in people with weakened immune systems.
Cytomegalovirus - immunocompromised host
Causes, incidence, and risk factors:
Most humans are exposed to CMV in their lifetime, but typically only individuals with weakened immune systems become ill from CMV infection. Usually, CMV produces no symptoms. However, serious CMV infections can occur in people with weakened immune systems due to AIDS , organ transplants, bone marrow transplant , chemotherapy , or medicines that suppress the immune system.
A CMV infection may affect different parts of the body. Infections include:
Once a person becomes infected, the virus remains alive, but usually dormant, within that person's body for life. Rarely does it cause recurrent disease, unless the person's immune system is suppressed due to medication or disease. Therefore, for most people, CMV infection is not a serious problem.
Primary CMV infection in pregnant women can cause harm to the developing fetus. See: Congenital cytomegalovirus
The symptoms of CMV infection are similar to those of mononucleosis. In fact, in a small percentage of people with mononucleosis, CMV is the cause. The symptoms of primary CMV infection are:
- General discomfort, uneasiness, or ill feeling (malaise )
- Joint stiffness
- Loss of appetite
- Muscle aches or joint pain
- Night sweats
- Prolonged fever
- Weight loss
In immunocompromised people, CMV can attack specific organs. The major symptoms of these organ-specific infections are:
- Visual impairment
- Ulcerations with bleeding
Signs and tests:
Blood and urine tests can detect and measure substances specific to CMV. A tissue biopsy may also be done.
Several antiviral medications are available to treat CMV. These medicines require close monitoring for adverse reactions. Antiviral drugs can help stop the virus from copying itself within the body. However, the drugs do not eliminate the virus from the body.
CMV infection in an immunocompromised host can be life-threatening. The severity of the disease depends on the strength of the person's immune system. Research has shown that people who have had a bone marrow transplant have the highest mortality risk.
Any immunocompromised person, whether an HIV patient, organ transplant recipient, bone marrow transplant recipient, or medically immunosuppressed person, should seek medical advice if any signs of infection occur.
- Kidney impairment (from medications used to treat the condition)
- Low white blood cell count (from medications used to treat the condition)
Calling your health care provider:
Call your health care provider if you are immunosuppressed and you have symptoms of CMV infection.
The following should be tested for CMV:
|Review Date: 11/1/2007|
Reviewed By: Kenneth M. Wener, M.D., Department of Infectious Diseases, Lahey Clinic, Burlington, MA. Review provided by VeriMed Healthcare Network.
The information provided herein should not be used during any medical emergency or for the diagnosis or treatment of any medical condition. A licensed medical professional should be consulted for diagnosis and treatment of any and all medical conditions. Call 911 for all medical emergencies. Links to other sites are provided for information only -- they do not constitute endorsements of those other sites. © 1997- A.D.A.M., Inc. Any duplication or distribution of the information contained herein is strictly prohibited.
"date": "2014-11-26T17:13:48",
"dump": "CC-MAIN-2014-49",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931007301.29/warc/CC-MAIN-20141125155647-00220-ip-10-235-23-156.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.902198076248169,
"score": 3.734375,
"token_count": 757,
"url": "http://www.emanuelmed.org/body.cfm?id=14&action=detail&AEArticleID=000663&AEProductID=Adam2004_1&AEProjectTypeIDURL=APT_1"
} |
Only 11 percent of all engineers in the U.S. are women, according to the Department of Labor. The situation is a bit better among computer programmers, but not much: women account for only 26 percent of all American coders.
There are any number of reasons for this, but we may have overlooked one. According to a paper recently published in the Journal of Personality and Social Psychology, there could be a subtle gender bias in the way companies word job listings in such fields as engineering and programming. Although the Civil Rights Act effectively bans companies from explicitly requesting workers of a particular gender, the language in these listings may discourage many women from applying.
The paper — which details a series of five studies conducted by researchers at the University of Waterloo and Duke University — found that job listings for positions in engineering and other male-dominated professions used more masculine words, such as “leader,” “competitive” and “dominant.” Listings for jobs in female dominated professions — such as office administration and human resources — did not include such words.
A listing that seeks someone who can “analyze markets to determine appropriate selling prices,” the paper says, may attract more men than a list that seeks someone who can “understand markets to establish appropriate selling prices.” The difference may seem small, but according to the paper, it could be enough to tilt the balance. The paper found that the mere presence of “masculine words” in job listings made women less interested in applying — even if they thought they were qualified for the position.
Shanley Kane, a software product manager in the Bay Area, says these subtleties should not be overlooked. “It’s worth paying special attention to how the ‘masculine-themed’ words they tested for — competitive, dominate, leader — denote power inequalities,” she explains. “A leader has followers. A superior has an inferior.”
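The effect described above is easy to prototype. The sketch below is my own illustration, not the researchers' tool: the word lists are a tiny sample built from the examples quoted in this piece (not the paper's full lexicons), and the `coded_word_counts` helper name is mine.

```python
import re

# Tiny illustrative word lists; the study used much larger lexicons.
MASCULINE = {"leader", "competitive", "dominant", "dominate", "analyze", "superior"}
FEMININE = {"understand", "support", "collaborate", "together", "community"}

def coded_word_counts(listing):
    """Count masculine- and feminine-coded words in a job listing."""
    words = re.findall(r"[a-z]+", listing.lower())
    masc = sum(1 for w in words if w in MASCULINE)
    fem = sum(1 for w in words if w in FEMININE)
    return masc, fem

ad = "We seek a competitive leader to analyze markets and dominate the field."
print(coded_word_counts(ad))
```

A real audit would also need stemming (so that "leaders" matches "leader") and validated word lists, but even this crude count flags the listing above as heavily masculine-coded.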
“Imagine living in a world where every errant utterance you make is preserved forever,” writes Danger Room’s Robert Beckhusen. That’s what DARPA is working on:
Analyzing speech and improving speech-to-text machines has been a hobby horse for Darpa in recent years. But this takes it a step further, in exploring the ways crowdsourcing can make it possible for our speech to be recorded and stored forever. But it’s not just about better recordings of what you say. It’ll lead to more recorded conversations, quickly transcribed and then stored in perpetuity — like a Twitter feed or e-mail archive for everyday speech.
With regard to psychopaths, “We think the ‘uhs’ and ‘ums’ are about putting the mask of sanity on,” Hancock told LiveScience.
Psychopaths appear to view the world and others instrumentally, as theirs for the taking, the team, which also included Stephen Porter from the University of British Columbia, wrote.
As they expected, the psychopaths’ language contained more words known as subordinating conjunctions. These words, including “because” and “so that,” are associated with cause-and-effect statements.
“This pattern suggested that psychopaths were more likely to view the crime as the logical outcome of a plan (something that ‘had’ to be done to achieve a goal),” the authors write.
And finally, while most of us respond to higher-level needs, such as family, religion or spirituality, and self-esteem, psychopaths remain occupied with those needs associated with a more basic existence.
Once scientists have perfected the science of how stories affect our neurochemistry, they will develop tools to “detect narrative influence.” These tools will enable “prevention of negative behavioral outcomes … and generation of positive behavioral outcomes, such as building trust.” In other words, the tools will be used to detect who’s been controlled by subversive ideologies, better allowing the military to drown out that message and win people onto their side.
A couple years ago I would have dismissed this, but data scientists are getting closer to being able to pull this sort of thing off. I’d still say this is years off, but it’s edging closer to the realm of possibility.
I’m not sure what the sample size is, or how old the adults in the study are, but:
Ferman and Avi Karni from the University of Haifa, Israel, devised an experiment in which 8-year-olds, 12-year-olds and adults were given the chance to learn a new language rule. In the made-up rule, verbs were spelled and pronounced differently depending on whether they referred to an animate or inanimate object.
Participants were not told this, but were asked to listen to a list of correct noun-verb pairs, and then voice the correct verb given further nouns. The researchers had already established that 5-year-olds performed poorly at the task, and so did not include them in the study. All participants were tested again two months later to see what they remembered.
“The adults were consistently better in everything we measured,” says Ferman. When asked to apply the rule to new words, the 8-year-olds performed no better than chance, while most 12-year-olds and adults scored over 90 per cent. Adults fared best, and have great potential for learning new languages implicitly, says Ferman. Unlike the younger children, most adults and 12-year-olds worked out the way the rule worked – and once they did, their scores soared. This shows that explicit learning is also crucial, says Ferman, who presented the results at the International Congress for the Study of Child Language in Montreal, Canada, this week.
Right now, troops trying to listen in on enemy chatter rely on a convoluted process. They tune into insurgency radio frequencies, then hand the radio over to local interpreters, who translate the dialogues. It’s a sloppy process, prone to garbled words and missed phrases.
What troops really need is a machine that can pick out voices from the noise, understand and translate all kinds of different languages, and then identify the voice from a hit list of “wanted speakers.” In other words, a real-life version of Star Wars protocol droid C3PO, fluent “in over 6 million forms of communication.”
Now, the Pentagon’s trying to fast-track a solution that could be a kind of proto-proto-prototype to our favorite gold fussbudget: a translation machine with 98 percent accuracy in 20 different languages.
Darpa, the military’s experimental research agency, is launching the Robust Automatic Translation of Speech program to streamline the translation process. (That’s “RATS,” for short. Ouch.)
“In effect, we discovered how the brain’s dictionary is organized,” said Just, the D.O. Hebb Professor of Psychology and director of the Center for Cognitive Brain Imaging. “It isn’t alphabetical or ordered by the sizes of objects or their colors. It’s through the three basic features that the brain uses to define common nouns like apartment, hammer and carrot.”
As the researchers report January 12 in the journal PLoS One, the three codes or factors concern basic human fundamentals:
1. how you physically interact with the object (how you hold it, kick it, twist it, etc.);
2. how it is related to eating (biting, sipping, tasting, swallowing); and
3. how it is related to shelter or enclosure.
The three factors, each coded in three to five different locations in the brain, were found by a computer algorithm that searched for commonalities among brain areas in how participants responded to 60 different nouns describing physical objects. For example, the word apartment evoked high activation in the five areas that code shelter-related words.
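The three-factor coding scheme described above can be caricatured in a few lines. The sketch below is a conceptual illustration only: the feature values are invented, and the real study worked from fMRI activation patterns discovered by an algorithm, not hand-coded vectors. It represents each noun as a (manipulation, eating, shelter) triple and "decodes" a response by nearest neighbor.

```python
import math

# Hand-invented (manipulation, eating, shelter) scores on a 0-1 scale.
NOUNS = {
    "hammer": (0.9, 0.0, 0.1),
    "carrot": (0.4, 0.9, 0.0),
    "apartment": (0.1, 0.1, 0.9),
}

def decode(response):
    """Return the noun whose factor vector is closest to the response."""
    return min(NOUNS, key=lambda n: math.dist(NOUNS[n], response))

# A strongly shelter-coded response decodes to "apartment".
print(decode((0.2, 0.2, 0.8)))
```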
By way of summary, this article intends to reframe your understanding of literacy before condensing the bulk of the content presented across the body of the document down to four simple steps for deeper exploration. First off, you’ll note the pretentious title. Before we get started, let me ask you to click this link. Don’t worry, it’ll open in an entirely new window, and you won’t lose your place here. I asked you to click the link to distract you from the pretentious title, but that title is likely what led you to read at least the first three sentences in this paragraph. What does this mean?
I’m just reading over some design sites trying to fill in my afternoon here and came across this interesting piece on the wonderful A Brief Message:
Your most intuitive, meaningful, and devastatingly clever design is worthless – unless it’s shallow enough to appeal in the first five seconds.
Most of the time, that’s all you’ll get before they walk, click, or turn away.
Every day, millions go window shopping. Flip through magazines or channels. Walk bookstore aisles, quickly judging each book… by its cover.
Ask us what we’re looking for, however, and most of us won’t know. Though we can’t articulate what we want, it’s clear that we all know it when we see it. Design helps us see it.
With more email, more channels, and more data, we’re left with less time. And more and more, we’re forced to make decisions in a split second, often based on less information than before.
Though we may think of design as a process that runs deep, often it works at very superficial levels.
It’s here that design plays an increasingly important role: communicating a concept, feeling, or attitude in a moment. It condenses the larger body of information that we’re no longer willing (or able) to attend to, and conveys it instantly. It’s what good design has always done, and it’s more important than ever.
This makes me wonder about the state of selling things as quickly as possible. Not just products/services, but people, too. The douchebag New Jersey kids with spray-on tans, the ditzy bar hussies who spend too much time thinking about their hair, people in general with no practical experience with their own subjective opinions.
It has to do with this post I recently made on the difference between how Americans and the French can tell when they’re full. One group grows up being told to eat everything on their plate, and feels dissatisfied till they do. The other eats and drinks only until they’re comfortable and sense their comfortable capacity has been met.
After observing the whole national movement which garnered around the Internet vs Scientology, I have to wonder: how do we inspire a Fight Club-like knowledge of subjective value and worth?
At the heart of the occult arts is the Art of knowing the limitless that exists within each one of us. And even that doesn’t do the concept justice, as we’re all One and we can shape and experience things in a multitude of levels, every living moment we’re gifted with on this plane.
So how might the Few go about designing interactions that are both attractive at face value and able to inspire a deeper interaction? Not an easy question, I know. But I want to know if any readers’ personal experiences testing those around them have produced results we can share here.
One experiment I came up with my friend was to detail three adjectives about your closest friends, the Why that you like them, Why they are your friends. Seems a pattern emerges after you go through enough friends, and the adjectives used seem to reflect things about ourselves. This reflects the old ideas that we can only know ourselves through those around us.
It also raises some interesting questions à la Prometheus Rising. What happens when you have a dear friend that is a skinhead and another that is a Bible-thumping Christian, as I do. Dropping labels from this we find a few characteristics of each person that define why I like them as people and hold them dear. Then there are a bevy of other characteristics they have that might not be to my liking, but I overlook them in favour of the way my preferred characteristics make me feel in their presence.
I might not like the skinhead’s disposition towards violence, but I admire his intellect. The Christian’s unquestioning faith in something they’ve been led to believe in drives me up the wall, but also intrigues me – but overall, I am elated by the sexual chemistry between us that is only amplified by these other differences.
What does this say about them? Not a lot, aside from that the skinhead is intelligent (as many typically seem to be), and that the Christian is sexually flustered and willing to take flirtation to a level of art that promiscuous women aren’t capable of (due to the relative ease of putting the penis in the va-jay-jay).
On the other hand, what does this say about me? Might be a poor example of my character, but it would seem you could accurately say I enjoy both intelligence in thought (even aggressive philosophies that might characterise the skinhead stereotype) and that I get off on flirting. Why are different, these are subjective things that I’ve come to learn about myself. Over the years, it’s been no secret that I’m fond of the controversial philosophies of the likes of Julius Evola (Italian fascist occultist) and that while I admire the layers upon layers of subtle sexual innuendo that flirting can bring about, the actual act can be a bit of a let-down and I am not an overly sexual person by nature. (I feed off the energy of sex, not the act itself. In that, I don’t actually require the physical stimulation.)
Popularity among social circles is also something that’s always piqued my interest, as has fashion, status, leadership, charisma, introverts, violence, and a host of other shit.
In contrast, I’ve inquired with a number of persons I know to list off adjectives about the friends they keep. Not all, but many are stumped and leave me with answers such as ‘They’ve just always been my friends,’ or vague miscellanies like ‘She’s just such a good person.’ I’m not saying that there aren’t good reasons to befriend these individuals, but there seems to be a lack of narrative to both identify and contemplate the Why. This brings me back to a lack of awareness of the self.
Which makes me wonder what activities might bring about this awareness?
While I am fond of people thinking in their own terms, I also believe words act as stepping stones to provide ground for new ideas to be explored and traversed. As is put forth in the Gospel of Philip:
Truth made names in the world,
and without them we can’t think.
Truth is one and is many,
teaching one thing through the many.
I am thinking promoting honest storytelling and dialogue amongst people is gonna be one of the first steps to developing subjective awareness. Perhaps difficult in America, the Land of Hollywood and TV, where stories are told for you, rather than by you. And us Canadians are no better, don’t think I’m not shaking my head at myself here.
I know I got more to think on, but I just wanted to get this out as I ponder away for the coming weeks. Little tidbits of random thought… | <urn:uuid:6c61b06d-acad-4430-bd1b-966b85834596> | {
"date": "2019-11-11T19:24:54",
"dump": "CC-MAIN-2019-47",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664437.49/warc/CC-MAIN-20191111191704-20191111215704-00056.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9573751091957092,
"score": 2.625,
"token_count": 3323,
"url": "http://www.technoccult.net/tag/linguistics/"
} |