The conversation around mental health is changing. Mental health issues need to be discussed year round, but May is a time that highlights those issues and raises awareness. Mental Health Awareness Month was created to raise awareness of mental health issues for millions of Americans through media, local events and mental health screenings. It’s also a time for people to come together and advocate for more research, prevention and treatment.
The 2018 theme for Mental Health Month is Fitness #4Mind4Body. When we talk about health, we can’t focus solely on physical health. We have to think of the whole person in a more holistic way that encompasses not just physical health but mental health. Fitness #4Mind4Body focuses on what we can do to be fit for our futures–regardless of where we are on our own journeys to health and wellness. The theme focuses on the relationships between diet and nutrition, exercise, the gut-brain connection, sleep and stress, and how they influence one another to affect overall well-being.
Poor mental health can negatively impact all aspects of your life, from self-esteem, to your performance at work or school, to your physical health, to your ability to “show up” as a good son, brother or parent. Just as it’s important for us to pay attention to what our body is telling us, it’s important to listen to what our mind is telling us too.
In the treatment community, we know how addiction and mental health influence one another, for better or worse. We know that effective treatment includes treatment for addiction and mental health, but it hasn’t always been this way.
When addiction and mental illness occur at the same time, it’s known as a co-occurring disorder or dual diagnosis, and according to a 2014 National Survey on Drug Use and Health, it’s something that 7.9 million Americans experience.
It might be Mental Health Month, but anytime is a good time to seek treatment for mental health issues, including addiction. Individualized treatment that takes a holistic approach to healing offers the most substantial chance of managing mental health symptoms and overcoming addiction. If you or someone you love is struggling with mental illness and/or addiction, Spearhead Lodge can help. Our program is structured to address the underlying factors that led to mental health issues and addiction. Contact a Spearhead Lodge Admissions Specialist by calling 1-866-905-4550.
"dump": "CC-MAIN-2020-29",
"url": "https://www.spearheadlodge.com/may-mental-health-month/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886121.45/warc/CC-MAIN-20200704104352-20200704134352-00371.warc.gz",
"language": "en",
"language_score": 0.9497396945953369,
"token_count": 499,
"score": 2.515625,
"int_score": 3
} |
Hey readers, it’s been a long while. I seem to go through alternating cycles of being either extremely busy or extremely lazy. Neither of these is conducive to getting a blog entry done. But here we go with a little something the graphic designer/illustrator in me thinks is really neat.
Isometric Pixel Art.
What does this mean? Let’s break it down!
“Isometric” is a style of graphical projection. A projection is simply a method of representing a 3D object in 2D space, used everywhere from Renaissance perspective painting to architectural modeling software. Every time you draw a cube as something more than just a square on your page, you’re using a form of projection. In fact, even if you draw just a square to represent a cube, you’re still using Orthographic Projection, which represents just one side of a 3D object at a time.
Isometric Projection is special because every dimension is equally foreshortened. When the axes of height, width, and depth meet, the angles formed are all equal (120 degrees), like so:
All this is in the word: iso meaning “equal” and metric meaning “measurement”. It is quite common to represent 3D objects using projections in which these angles are not equal. In fact, non-isometric renderings often look more natural to many viewers as they cause less distortion to the “front” face of an object. See these three different projections of the same desk:
However, because all axes meet at equal angles, isometry is often easier to draw and also allows for interesting illustrative possibilities, including optical illusions like one found on this Swedish postage stamp:
This illusion is only possible due to the equally foreshortened axes of an isometric projection. Many of M.C. Escher’s famous illusions were achieved through the juxtaposition of placing an isometric object against a non-isometric background.
Because isometry is built around a very specific set of angles, it actually represents the view from a precise camera angle above and in front of the object. Knowing this, a photographer may actually place him/herself at this exact location in relation to the subject, and replicate isometry through photography, as in this real photograph of a corner in NYC:
This is not an illustration, and the photograph has not been altered in any way. The artificial, toy-like “computer-gamey” feel is due purely to our cultural associations of isometric projections with artificial illustrations. This association is so strong that our minds actually read real photographs of isometry as some sort of replica or computer graphic. This brings us to one of the prominent uses of isometric projection in our culture: computer game graphics.
Before the dawn of complex 3D physics engines, early low-power gaming systems needed to represent 3D environments in a simple, yet passably realistic manner. Enter: Isometric projections! The classic SimCity is a great example of this:
These early computer games (and any other software that required graphics) did not have access to nearly as much computing power as we do today. Images and gaming environments were simplified by limiting the color palette, resolution, interactivity, and level of detail. In these simplified low-resolution images, the pixels, small grid squares of color that make up all digital images, were still quite visible:
These are both screenshots of video game aliens. The left character hails from 1978’s Space Invaders, is 10 pixels high, and is rendered in 1-bit color (each pixel is either ‘on’ or ‘off’). Next to him is a direct descendent from nearly 30 years in the future, a creature from 2007’s Halo 3. He, in contrast, is 389 pixels high and is modeled in the 16,777,216 colors of 32-bit graphics.* Both images are essentially the same: a grid of pixels tuned to specific color values. The difference is the number of pixels in each image (resolution) and the number of colors each pixel can be tuned to (color depth). The rapid advances in computing speed and memory capacity in the past 30 years are all that separate the generations.
However, a glance into any Urban Outfitters will reveal, through racks of Transformers T-shirts and cassette-tape belt buckles, our youth culture’s obsession with appropriating the aesthetic of the 70’s, 80’s, and early 90’s. While this is largely irony-driven, the nostalgia of first-generation gamers who are now reaching adulthood is sincere. Whatever the motivation, there has been a huge interest in creating art and graphics that return to that blocky, bright-colored, low-fi look of early video games. Hence the rise of “pixel art.” Simply Google the term and you’ll find the trend everywhere.
This spark in interest is also fueled by the accessibility of pixels as an art form. Although many artists use the Adobe Creative Suite for their creations, the very simplest (and cheapest) of raster image-editing programs are quite sufficient, including MS Paint and a number of freeware programs like Pixen:
So what makes something pixel art? While it is technically true that all images created on a computer are made of pixels, “pixel art” generally refers to illustrations in which the individual pixels are intentionally visible and are not blended to create smooth gradients and shadows (think Space Invaders, not Halo). Blockiness is the desired aesthetic, and pixel artists specifically avoid the anti-aliasing capabilities of most image-editing software. Additionally, pixel art involves the painstaking dot-by-dot manipulation of each individual pixel, making it much more similar to mosaic than painting. Most pixel art also embraces a cartoon-like form in which each shape is outlined by a black line, and filled with a solid, bright, color.
So, as pixel art has been seized by the internet zeitgeist for its easy-to-make retro charm, isometry has come with it. Most pixel art made in recent years has used this specific projection. The grid-like nature of pixel drawings also lends itself well to the mathematical manipulations needed to convert a 3D image into a 2D isometric one. To create the 30 degree and 60 degree lines key to isometry, game designers and pixel artists use a stair-step system of 2 pixels over, 1 pixel up; 2 pixels over, 1 pixel up, like this:
Do the trigonometry, [arctan(1/2)], and you’ll see this arrangement actually creates lines of 26.6 degrees and 63.4 degrees instead of the true 30 and 60. However, this 2-over-1 stair-stepping creates an easy-to-use, smooth line. A true 30 degree line made of pixel blocks is messy and jagged. So, technically, isometric pixel art only employs a close approximation of isometry.
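If you want to check the math yourself, here's a quick Python toy (my own throwaway snippet, not from any pixel art tool) that traces the 2-over-1 stair-step on a text grid and prints the true angle of the line it forms:

```python
import math

# Toy demo: trace a "2 pixels over, 1 pixel up" stair-step line on a grid.
WIDTH, HEIGHT = 12, 7
grid = [["." for _ in range(WIDTH)] for _ in range(HEIGHT)]

x, y = 0, 0
while x + 1 < WIDTH and y < HEIGHT:
    grid[y][x] = "#"      # two pixels over...
    grid[y][x + 1] = "#"
    x, y = x + 2, y + 1   # ...then one pixel up

for row in reversed(grid):   # print row 0 at the bottom so "up" reads correctly
    print("".join(row))

# The true slope of this stair-step line:
print(round(math.degrees(math.atan(1 / 2)), 1))  # 26.6 degrees, not 30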
So while the building blocks and methods of creating isometric pixel art are simple and easily taught, artists can create some amazingly complex and beautiful works of art and illustration. In recent years, the style has leapt from the online art community to appear in mainstream advertisements for Coca-Cola, Bell Telephone, and Adidas as well as in print illustrations for magazines Fortune, Wired, and Popular Science. You’ve probably seen a few.
Arguably the current godfather of the isometric pixel art world, and certainly the most commercially exposed, is the illustration firm eBoy. Their signature style creates whimsical toy-like portraits of world cities, such as Baltimore, Singapore, and Dublin. In fact, cityscapes seem to be the most popular subject for isometric pixel drawings, and I can’t help but wonder how much of this is directly inspired by SimCity’s early use of the form.
Just to wrap things up, here’s a short gallery of some of the neat things that illustrators have been creating with IPA:
Outside of the large design firms, amateur pixel artists have united in a number of great collaborative projects, in which each contributor draws their own block of a massive city, or floor of a high-rise tower.
The website of the city of Washington DC has seized on another great application with an interactive isometric pixel map to help tourists navigate the city in friendly 3D format.
And finally, here’s a little something I made, just to try my hand at the style. This was made in Photoshop CS2 and took about 30 min. Maybe I’ll populate it with some details and characters sometime.
Food for Thought: What sort of products/services lend themselves best to advertisements featuring IPA illustrations? What sort of associations does this style of ad connote for a product/service? Due to the readily available tools and universal grid format, IPA lends itself to massive collaborative projects. Do you know of other forms of art that allow for this level of democratic cooperation?
Continuing Research: I’d like to follow this trend, to see if it takes off and appears in increasingly widespread and mainstream venues or if it proves a forgettable few-year fad, like so many items of the fickle internet zeitgeist.
*Well, it was originally in 32-bit color. Depending on your computer, the actual color space you’re seeing on this page probably has fewer bits. But you get the idea.
Well, there’s my epic 4th entry. I promise future entries will never be this long or far between…I don’t have the time to write them, and I’m sure few of you have the endurance to read them. My plan for TBIAM is to update much more frequently, with much shorter entries…you know, like a normal person’s blog. So expect quick “I was just thinking about this the other day” blurbs…and keep your eyes on this page, I haven’t abandoned it yet! In fact, I already have some ideas lined up for future exhibits.
"dump": "CC-MAIN-2017-30",
"url": "https://thatbelongsinamuseum.wordpress.com/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426629.63/warc/CC-MAIN-20170726202050-20170726222050-00224.warc.gz",
"language": "en",
"language_score": 0.9412584900856018,
"token_count": 2102,
"score": 2.609375,
"int_score": 3
} |
Part 5 of a Tribune investigation unearths documents showing that decisions by the U.S. military and chemical companies that manufactured the defoliants used in Vietnam made the spraying more dangerous than it had to be. Complete coverage >>
As the U.S. military aggressively ratcheted up its spraying of Agent Orange over South Vietnam in 1965, the government and the chemical companies that produced the defoliant knew it posed health risks to soldiers and others who were exposed.
That year, a Dow Chemical Company memo called a contaminant in Agent Orange "one of the most toxic materials known causing not only skin lesions, but also liver damage."
Yet despite the mounting evidence of the chemical's health threat, the risks of exposure were downplayed, a Tribune review of court documents and records from the National Archives has found. The spraying campaign would continue for six more years.
Records also show that much of the controversy surrounding the herbicides might have been avoided if manufacturers had used available techniques to lessen dioxin contamination and if the military had kept better tabs on levels of the toxin in the compounds. Dow Chemical knew as early as 1957 about a technique that could eliminate dioxin from the defoliants by slowing the manufacturing process, according to documents unearthed by veterans' attorneys.
Since the Vietnam War, dioxin has been found to be a carcinogen associated with Parkinson's disease, birth defects and dozens of other health issues. Thousands of veterans as well as Vietnamese civilians were directly exposed to the herbicides used by the military.
Debilitating illnesses linked to defoliants used in South Vietnam now cost the federal government billions of dollars annually and have contributed to a dramatic increase in disability payments to veterans since 2003.
Documents show that before the herbicide program was launched in 1961, the Department of Defense had cut funding and personnel to develop defoliants for nonlethal purposes. Instead it relied heavily on the technical guidance of chemical companies, which were under pressure to increase production to meet the military's needs.
The use of defoliants led to massive class-action lawsuits brought by veterans and Vietnamese citizens against the chemical firms. The companies settled with U.S. veterans in the first of those suits in 1984 for $180 million.
Since then, the chemical companies have successfully argued they are immune from legal action under laws protecting government contractors. The courts also found that the military was aware of the dioxin contamination but used the defoliants anyway because the chemicals helped protect U.S. soldiers.
A 1990 report for the secretary of the U.S. Department of Veterans Affairs found that the military knew that Agent Orange was harmful to personnel but took few precautions to limit exposure. The report quotes a 1988 letter from James Clary, a former scientist with the Chemical Weapons Branch of the Air Force Armament Development Laboratory, to then-Sen. Tom Daschle, who was pushing legislation to aid veterans with herbicide-related illnesses.
"When we initiated the herbicide program in 1960s, we were aware of the potential for damage due to dioxin contamination in the herbicides," Clary wrote. "We were even aware that the 'military' formulation had a higher dioxin concentration than the 'civilian' version due to the lower cost and speed of manufacture. However, because the material was to be used on the 'enemy,' none of us were overly concerned."
Military scientists had been experimenting with herbicides since the 1940s, but funding cuts in 1958 left few resources in place to fully evaluate the chemicals for use in Vietnam.
"I was given approximately 10 days notice to come to Vietnam to undertake 'research' in connection with the above tasks," wrote Col. James Brown of the U.S. Chemical Corps Research and Development Command in an October 1961 report to top brass just as the defoliation program was ramping up. "Thus, a large order was placed on a very poorly supported research effort."
The military launched a limited herbicide program in 1962 that involved 47 missions. At the time, relatively little was known about the health effects of dioxin, in part because cancer and other illnesses can take decades to develop and the herbicides had only been in wide use since 1947.
But documents uncovered by veterans' attorneys show the chemical companies knew that ingredients in Agent Orange and other defoliants could be harmful.
As early as 1955, records show, the German chemical company Boehringer had begun contacting Dow about chloracne and liver problems at a Boehringer plant that made 2,4,5-T, the ingredient in Agent Orange and other defoliants that was contaminated with dioxin.
Unlike U.S. chemical companies, Boehringer halted production and dismantled parts of its factory after it discovered workers were getting sick. The company studied the problem for nearly three years before resuming production of 2,4,5-T.
In doing so, the company found that dioxin was the culprit and that it could limit contamination by cooking the chemicals at lower temperatures, which would slow production.
In response to questions from the Tribune, Dow said it didn't purchase the proprietary information on the technique until 1964 and didn't start using it until 1965. Records show it did not inform other manufacturers or the government about the technique until the military began planning construction of its own chemical plant to make herbicides in 1967.
By that time, Dow also had developed a procedure to test dioxin levels in batches of 2,4,5-T. The company provided that technique to other companies in 1965 but not to the military until 1967, the company said.
Earlier in the decade, nearly two dozen military officials and chemical industry scientists met in April 1963 to issue a "general statement" about the health hazards from 2,4-D and 2,4,5-T. No one raised concerns about using the chemicals in Vietnam, according to minutes from the meeting.
Evidence focused largely on the fact that more than 300 million gallons of the compounds had been used domestically since 1947, even though the formulations for Vietnam would be far more concentrated and contain more dioxin.
"The committee concluded that no health hazard is or was involved to man or domestic animals from the amounts or manner these materials were used in aforementioned exercise," the minutes show.
Nonetheless, Dow told the Tribune it had been sharing information about health issues with the military. "In fact, the chemical manufacturers, including Dow, were in dialogue with the U.S. government regarding the potential hazards of chloracne in production workers beginning as early as 1949 and continuing through the 1960s," Dow spokesman Peter Paul van de Wijs said in a written response.
In 1965, the chemical companies involved in producing the defoliants met at Dow's headquarters in Midland, Mich., to discuss the contaminant's threat to consumers.
"This material (dioxin) is exceptionally toxic; it has a tremendous potential for producing chloracne and systemic injury," Dow's chief toxicologist, V.K. Rowe, wrote to the other companies on June 24, 1965.
But none of the companies informed the military personnel charged with overseeing the defoliation contracts of the safety concerns until late 1967, according to depositions from the lawsuits.
Internal documents from multiple companies indicate they were worried about the specter of tighter regulation.
Only after a study for the National Institutes of Health showed that 2,4,5-T caused birth defects in laboratory animals did the military stop using Agent Orange, in 1970.
Alan Oates, a Vietnam veteran who chairs the Agent Orange committee for Vietnam Veterans of America, said veterans have had little luck in their legal fight for compensation since the 1984 settlement.
Veterans have argued unsuccessfully in court that the settlement was insufficient because it came too early for thousands of people whose illnesses did not develop until after all the settlement money had run out.
One unresolved issue, Oates said, is whether chemical companies can be held liable for health costs associated with birth defects seen in the children of Vietnam veterans. "Now that it's starting to show it has an impact on future generations, what is the recourse for those folks?" Oates said.
"dump": "CC-MAIN-2019-22",
"url": "https://www.chicagotribune.com/lifestyles/ct-xpm-2009-12-17-chi-agent-orange-dioxindec17-story.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256571.66/warc/CC-MAIN-20190521202736-20190521224736-00246.warc.gz",
"language": "en",
"language_score": 0.9756615161895752,
"token_count": 1674,
"score": 2.953125,
"int_score": 3
} |
Cavities are on the top of the list of most people’s dental concerns. With proper oral hygiene and biannual cleanings and checkups, you stand a good chance of living your life with relatively few cavities, if any at all. They do happen, however, so don’t panic if your Des Moines IA dentist, Dr. Steve Burds tells you that you have one at your next six month checkup. To learn more facts about cavities, try your hand at the following true-or-false quiz.
Q1. True or False: Sugar causes tooth decay.
Q2. True or False: The first sign of a cavity is a toothache.
Q3. True or False: Tooth extraction or loss is the likely outcome of untreated tooth decay.
Q4. True or False: The bacteria that cause cavities are in our mouth from birth.
A1. FALSE – While it’s true that sugar contributes to tooth decay, cavities are actually caused by Streptococcus mutans (AKA S. mutans). This strain of oral bacteria produces lactic acid when it feeds on sucrose. The acid and more bacteria mix with food debris to create plaque. Eating a whole-foods diet with only natural sugars, along with good oral hygiene, will help keep cavities at bay.
A2. FALSE – Not necessarily, because cavities don’t always exhibit symptoms. If you do have pain or discomfort, it generally results from very advanced stages of decay. When the dentin beneath your tooth enamel begins to rot, your nerves become exposed, which is what causes the pain sensation.
A3. TRUE – Fillings suffice for most mild to moderate cavities, but severe and advanced tooth decay can require more invasive procedures, and even extraction. When infection from tooth decay gets into the root of the tooth, a root canal procedure will likely be recommended. This will give your Des Moines, IA dentist the opportunity to cleanse the area and make it bacteria free. Your remaining tooth structure will be given a customized crown to reinforce and seal the area. If there’s too much decay, however, you could end up losing the tooth.
A4. FALSE – We are born with mouths that are bacteria-free. S. mutans are contagious germs. We generally get them from our parents or caregivers when we are very young babies. Kissing, sharing drinks, and blowing on food can transmit the oral bacteria from one person to another. Pathogens are able to survive in a baby’s mouth even well before their first teeth erupt through the gums.
Fillings and Crowns from Des Moines IA Dentist
If you suffer from tooth decay, a filling or dental crown can strengthen your tooth and restore its function. Contact your 50309 dental office by calling (515) 244-9565. Located in the 50309 area, we proudly welcome patients from Des Moines, River Bend, Kirkwood Glen, East Village, and neighboring communities.
"dump": "CC-MAIN-2023-14",
"url": "https://gatewaydentalgroup.org/2013/07/des-moines-ia-dentist-challenges-you-to-a-cavity-quiz/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945218.30/warc/CC-MAIN-20230323225049-20230324015049-00177.warc.gz",
"language": "en",
"language_score": 0.9204563498497009,
"token_count": 629,
"score": 3.21875,
"int_score": 3
} |
Argyll beaver monitoring hampered by terrain
Efforts to monitor beavers released in Argyll are being hampered by the terrain, according to a new report.
The first animals captured in Norway were kept temporarily at the Highland Wildlife Park, near Aviemore, before being released in Knapdale in May 2009.
A total of 15 were introduced during 2009, but the report said only nine were believed to still be alive in the release area by June 2010.
Steep and heavily wooded terrain has caused problems tracking the beavers.
Researchers said some tags attached to beavers' tails were lost within two or three months of their release.
The tags helped monitor the animals using a system known as radio-telemetry.
Even when tags were on the animals, the report's authors said the landscape challenged this method of monitoring.
The first annual report on the Scottish Beaver Trial was commissioned by Scottish Natural Heritage (SNH).
The researchers said Knapdale's thick woods and steep ridges between its lochs and river systems presented "serious difficulties" to the transmission of radio signals.
The report said: "The most significant monitoring difficulties during the first year of the trial resulted from the use of radio-telemetry methods, and difficulties presented by the terrain and vegetation at the release site."
Trapping beavers from a boat, the preferred method for counting the animals in Norway, was also done in the first year of the project.
The report said trapping by boat would continue, but would be carried out after the welfare of the animals was taken into consideration.
A total of 15 beavers were released during the first year of the trial.
The annual report said two deaths were recorded in the wild, while a third animal that had been withdrawn from the programme died in captivity. All three were males.
Three females were classified as "missing" at the time of the writing of the report.
'Settled in'
However, the Scottish Beaver Trial reported in January 2011 that Knapdale has 12 beavers following the birth of two kits and further introductions.
Simon Jones, Scottish Beaver Trial Project Manager, said: "Our first beaver families were released two years ago as part of a five-year trial reintroduction, and are now very settled in their new home in Knapdale.
"Two beaver kits were born last year and we're hoping to see new kits emerging from their lodges this year in June or July.
"Due to the nature of wildlife projects and the terrain at Knapdale there have been some inevitable challenges which are to be expected, but we are very satisfied with how the trial, which is the first formal reintroduction of a native mammal to the UK, is progressing."
SNH said the difficulties posed by the terrain was not a surprise.
A spokesman added: "The first year of the project was used to refine the methodology of what was a complex monitoring effort.
"Most of the recommendations in the report are already being implemented." | <urn:uuid:b3129b2d-6133-426e-8a07-5ba7cf4f56da> | {
"dump": "CC-MAIN-2013-20",
"url": "http://www.bbc.co.uk/news/uk-scotland-glasgow-west-13572348",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703532372/warc/CC-MAIN-20130516112532-00062-ip-10-60-113-184.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9866149425506592,
"token_count": 647,
"score": 2.828125,
"int_score": 3
} |
Intelligent and opportunistic, raccoons are one of the most common nuisance wildlife species handled by animal control companies. They aid in controlling plant and pest populations, preferring a diet consisting mainly of insects, plant matter, and small vertebrates. But ever-expanding urban and suburban development is leaving fewer and fewer natural raccoon habitats, forcing this species to turn to alternative food and shelter sources for survival.
Raccoons are a perfectly normal part of our environment, and encountering one outside on your property is no cause for panic. But if you experience a continuing problem with trash can raiding or suspect animals may be denning in your home, you’ll probably want to know how to get rid of raccoons.
How to Get Rid of Raccoons
Secure Trash Cans
Raccoons inhabiting areas of dense human population tend to forage in dumpsters and trash cans since garbage is a convenient food source. For homeowners, this often leads to a front lawn covered in ripped trash bags and debris. To prevent raccoons from infiltrating your garbage, keep the cans in a bracketed frame so they cannot be tipped over, and secure the lid using bungee cords.
Raccoons can also ruin lawns, gardens, and flower beds while digging for insects. If you notice that portions of your landscaping have been torn out of the ground, treating the soil for grubs can help prevent a continuing long-term raccoon problem.
Because raccoons are excellent climbers, bird feeders are another common source of nourishment. If you’re concerned about how to get rid of raccoons that are accessing your bird feeders, try adding some hot chili powder into the seed mix. Mammals are sensitive to the heat-inducing properties of the capsaicin in the pepper while birds don’t contain the taste receptors necessary to process that sensation. Raccoons and squirrels will be repelled, and birds won’t notice a difference. However, if this problem persists, it may be necessary to relocate or remove the feeders.
In the wild, raccoons avoid open, grassy areas, preferring a habitat with an abundance of trees. This provides them with a means to climb up and away from danger as well as ample denning locations in hollows and crevices. In urban and suburban settings, these climbing and denning instincts lead raccoons to breach the attics and chimneys of structures in search of a safe living space. They will also nest beneath structures in the abandoned dens of other animals.
To prevent raccoons from entering your home or business, ABC Wildlife inspects the exterior of the building for any holes or crevices through which they may gain access. Raccoons require an opening only as large as their head to breach a structure. We cover chimneys and roof vents with guards that are specially designed to prevent the entry of nuisance animals. ABC Wildlife also replaces damaged shingles where raccoons may be able to tear through the roof.
Raccoons in Your Home
If you detect evidence of raccoon activity in your home, contact the nuisance wildlife specialists at ABC Wildlife immediately. Raccoons carry a variety of health risks including leptospirosis, raccoon roundworm, and rabies and should not cohabitate with humans.
Indicators of raccoon presence in a home include loud scuffling or thumping, shingles torn off the roof, muddy paw prints up the gutter, bent or mangled vent caps, and, if there are babies present, chirping or squeaking noises. Our animal control technicians will be able to identify the entry point and work with you to create an effective and efficient plan to remove the raccoons from your home, as well as provide you with solutions to reduce the likelihood of future animal breaches. We know exactly how to get rid of raccoons, and we're ready and eager to help you with your problem today. Call (847) 870-7175 to speak with one of our friendly representatives!
Vito Brancato is a wildlife specialist and educator with over 15 years of animal and pest management experience. He is a certified Wildlife Control Operator through the National Wildlife Control Operators Association and belongs to the National Pest Management Association and the Illinois Pest Control Association. He is an avid beekeeper and nature enthusiast.
Image courtesy of Alan Vernon via Creative Commons license on Flickr.
"dump": "CC-MAIN-2021-04",
"url": "https://abcwildlife.com/blog/how-to-get-rid-of-raccoons/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524858.74/warc/CC-MAIN-20210121132407-20210121162407-00232.warc.gz",
"language": "en",
"language_score": 0.9198380708694458,
"token_count": 952,
"score": 2.96875,
"int_score": 3
} |
A Federal judge has ruled that the Minnesota Dept. of Natural Resources will have to take additional steps to protect the endangered Canada lynx from trapping.
The Animal Protection Institute and the Center for Biological Diversity sued the DNR claiming trappers were catching and killing the endangered wildcats.
Marc Fink of Duluth, an attorney for the Center for Biological Diversity, said the ruling is a victory for the lynx.
"Well it's a great victory for lynx," said Fink. "It means that they need to be protected from trappers, just as the Endangered Species Act provides for. Once a species is listed as threatened or endangered, the state should have taken steps immediately to protect that species. In this case, unfortunately, it's taken a number of years and a lawsuit to compel the state to do so."
The DNR has until the end of April to present the court with a plan for keeping lynx out of traps.
The DNR could choose to restrict some kinds of traps or the size of some traps in those parts of Minnesota where the Canada lynx might be found.
"dump": "CC-MAIN-2016-18",
"url": "http://www.mprnews.org/story/2008/04/01/lynx",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461864121714.34/warc/CC-MAIN-20160428172201-00050-ip-10-239-7-51.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9667155146598816,
"token_count": 226,
"score": 2.8125,
"int_score": 3
} |
Long before the invention of Photoshop, artists were creating trippy and groovy fake images — and a new exhibition at the Metropolitan Museum of Art shows how. "Faking It: Manipulated Photography Before Photoshop" opens at the Museum on Oct. 11, and the New York Daily News has a collection of some of the most surreal and insane fake photos.
The techniques used to create these images include multiple exposure on a single negative, and printing a single print from multiple negatives. Says curator Mia Fineman, "There is this whole counter-tradition that is not about creating an accurate, trust-worthy image of the world, but about using the camera and dark-room techniques to manipulate the truth."
Check out a few of our favorite images below, and tons more at the New York Daily News site.
Top image: ‘Room With Eye' (1930) by Maurice Tabard (1897–1984)
‘Dream No. 1: ‘Electrical Appliances for the Home' (1948) by Grete Stern (1904-1999)
‘Two-Headed Man' (1855) by unidentified American artist
‘Man on Rooftop with Eleven Men in Formation on His Shoulders' (ca. 1930) by unidentified American artist
‘Man Juggling His Own Head' (ca. 1880) by unidentified French artist
‘Hearst Over The People' (1939) by Barbara Morgan (1900-1992)
‘Dirigible Docked on Empire State Building, New York’ (1930) by unidentified American artist
"dump": "CC-MAIN-2020-40",
"url": "https://io9.gizmodo.com/how-did-we-fake-photos-before-photoshop-5942658",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400187899.11/warc/CC-MAIN-20200918124116-20200918154116-00106.warc.gz",
"language": "en",
"language_score": 0.9030991196632385,
"token_count": 330,
"score": 2.640625,
"int_score": 3
} |
The attacks on the World Trade Center and the Pentagon have changed our world. For many, the fear and shock have resulted in feelings of hatred and a desire for revenge. In Forgiveness, Michael Henderson documents the path that led to forgiveness and reconciliation for many conflicts of the twentieth century. He tells the story of remarkable people from many nations and different faiths, who--in the most painful circumstances--have broken the chain of hate. From these examples, we can find new ways of working together along the fault lines of history.
"dump": "CC-MAIN-2018-13",
"url": "http://www.barclaypressbookstore.com/Bargain-Books/forgiveness.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648103.60/warc/CC-MAIN-20180322225408-20180323005408-00547.warc.gz",
"language": "en",
"language_score": 0.9242703318595886,
"token_count": 106,
"score": 2.703125,
"int_score": 3
} |
Without any federal pressure, Wicomico County should review whether or not a quarter-century-old voting-district system is fair to all.
If anyone is uncertain whether Wicomico County's demographics are shifting, take a look at 2010 census data.
One-in-three county residents were people of color when that count was taken three years ago. Just 20 years earlier, less than one-in-four people were minorities.
Yet the Wicomico County Council election system put into place a quarter-century ago has failed to yield more than one African-American member. That system came into being as the U.S. Department of Justice pressed for a fairer system; previously, five at-large council seats failed to yield any black council members.
While one-in-seven African-Americans is better than zero-in-five, minority representation still lags significantly the presence of people of color in the local population. The numbers are simple: 33 percent of Wicomico County residents are people of color, but only 14 percent of council members are.
The ACLU last week pressed the federal government to reopen its Voting Rights Act challenge to the councilís structure. The Justice Department had wanted more changes than what Wicomico County made in the late 1980s, but federal courts would not require further steps then.
The court felt more time was needed to assess the impact of a seven-member council with two at-large seats. By 2013, that impact is more than clear: Minorities remain underrepresented in county government.
Our nation, and the Eastern Shore, are going to become only more diverse. Without the need for federal intervention, Wicomico County leaders should take it upon themselves to reorder council districts so that they best represent all of us.
"dump": "CC-MAIN-2015-35",
"url": "http://www.delmarvanow.com/article/A7/20130513/OPINION/305130001/OPINION-Wicomico-County-should-revisit-voting-districts",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645191214.61/warc/CC-MAIN-20150827031311-00031-ip-10-171-96-226.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9642139673233032,
"token_count": 375,
"score": 2.65625,
"int_score": 3
} |
Presenter: Mathew Needleman
Location: Los Angeles, CA
Presentation Description: As mobile devices take over we have to decide between replicating traditional classroom work with digital flashcards or infusing our classrooms with creativity. Mathew Needleman describes how taking pictures with an iPhone sparked a personal creative renaissance and how this might occur in our classrooms.
Link to presentation’s supporting documents:
Follow Mathew Needleman on Instagram @needleworks:
"dump": "CC-MAIN-2015-22",
"url": "http://k12onlineconference.org/2012/10/29/kicking-it-up-a-notch-keynote-its-not-about-the-apps/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928019.31/warc/CC-MAIN-20150521113208-00010-ip-10-180-206-219.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.7875614166259766,
"token_count": 263,
"score": 2.796875,
"int_score": 3
} |
Chronic leg ulcers affect approximately 6.5 million Americans (1); the disorder includes venous stasis ulcers, arterial ulcers, pressure ulcers, and diabetic (neuropathic) ulcers. They are associated with significant morbidity, reduced quality of life, and high treatment costs. Annually, over $25 billion is spent in the USA on chronic ulcer treatment, and that burden is steadily growing due to an aging population and rising incidences of both diabetes and obesity (1). Most chronic ulcers have underlying concomitant vascular diseases at both macro- and microvascular levels, which lead to ischemia and delayed or failed healing. In fact, recent studies revealed that critical ischemia-related ulcers are the most severe ulcers and these patients have the longest hospital stays and healing times (2). The 5-year mortality rate is comparable to that of common types of cancer (3).
One highly effective intervention to chronic ulcers is revascularization surgery, which restores in-line arterial blood flow to ulcers (4). Over the last decade, the angiosome concept has become widely accepted for guiding revascularization. However, in certain patients, particularly diabetics or those with peripheral artery disease (PAD), perfusion to one vessel may be normal, while perfusion to an area of tissue loss may be inadequate (4,5). The ability to detect changes after revascularization and determine whether perfusion has been adequately restored would significantly benefit vascular surgeons and specialists caring for patients with tissue loss in their feet (6).
However, current clinical tests for blood perfusion fail to meet this need. As shown in Table 1, generalized assessment approaches, such as ankle-brachial pressure index and toe blood pressure readings, do not provide information about blood perfusion at a specific region of tissue. They are also subject to errors or limitations due to the inaccuracy of handheld Doppler devices, vessel calcification, and loss of digits (7). Tissue-level assessment techniques include photoplethysmography (PPG) and transcutaneous oxygen pressure (TcPO2). However, PPG readings cannot be specific to a single vessel due to the nature of light diffusion, and the result is susceptible to local skin conditions and reflex veins (8). TcPO2 requires a lengthy acquisition time (>45 minutes) and the reading is affected by cellulitis or significant foot edema (9). Doppler ultrasound and X-ray CT angiography have also been used in vascular clinics. However, their primary roles are in the assessment of proximal vessels and these technologies are rarely used in evaluating distal perfusion, due to their low sensitivity to small vessels (Doppler ultrasound) or ionizing radiation (X-ray CT) (9). Recently, various near-infrared imaging techniques have been proposed for wound assessment (10,11). Among them, indocyanine green (ICG)-based near-infrared fluorescence angiography is probably the most widely studied (9). Upon intravenous injection of ICG, fluorescence images of ICG in tissue can be used to inform tissue perfusion and blood flow. However, the technique has a relatively low rate of clinical adoption, possibly because of two limitations: (I) the need for contrast injection, which increases patient discomfort; and (II) light diffusion, which degrades the spatial resolution and imaging depth (2).
To address these challenges, we proposed a three-dimensional (3D) high-resolution, deep-tissue wound assessment system based on photoacoustic tomography (PAT). PAT is an emerging hybrid imaging modality that provides ultrasonic detection of optical absorption in tissue through the photoacoustic effect. The conversion of optical energy into acoustic energy breaks through the optical diffusion limit, enabling deep-tissue optical absorption to be visualized at a high spatial resolution (12,13). Because blood vessels contain higher concentrations of hemoglobin, PAT can noninvasively map vascular distribution without using any exogenous contrast agents (14,15).
While various PAT systems have been developed for different applications, there are limited reports on a PAT system that is tailored for foot imaging. Here, we introduce our first portable PAT foot imaging system developed based on a custom-made linear transducer array. Two versions of systems were developed. One is built based on bulky laser and ultrasound systems for imaging healthy volunteers in the university lab, while the other is a miniaturized, cart-based system for patient imaging in the vascular clinic. The two systems share the same ultrasound transducer array and fiber bundle for acoustic detection and optical illumination, respectively.
The proposed system (Figure 1A) consists of a custom-made waterproof linear transducer array (IMASONIC, France), a data acquisition (DAQ) system, and a Q-switched Nd:YAG laser. The customized waterproof transducer array has 128 elements over an 8.6 cm lateral width. The central frequency and focal length of the transducer are 2.25 MHz and 4 cm, respectively. The DAQ unit used for lab testing has 128 channels (Vantage-128, Verasonics Inc.) with 14-bit digital resolution, 54 dB gain, and up to 64 MHz sampling rate. The portable DAQ system (WinProbe, UltraVision Research Platform) (16) used for clinical testing has 64 transmit and receive channels, 14-bit digital resolution, 54 dB gain, and up to 40 MHz sampling rate. Overall, the two systems have comparable acquisition parameters, except that the WinProbe system requires two laser pulses to capture 128 channels of data. The lab unit uses a flash-lamp-pumped Continuum™ Nd:YAG laser with 8–10 ns pulse width, 10 Hz pulse repetition frequency (PRF), and up to 800 mJ output power. The portable, consumer-grade Nd:YAG laser has 6–10 ns pulse width, 6 Hz PRF, and up to 200 mJ output power. Both lasers output at a 1,064 nm wavelength, which penetrates deep into biological tissue (17). For all human experiments, the light intensity on the skin surface was measured to be less than 21 mJ/cm2, which is well below the ANSI safety limit of 100 mJ/cm2 for 1,064 nm light (18).
As shown in Figure 1A, we used a water tank to couple the foot and the transducer array. The water tank has an opening at the bottom, which was sealed by a 0.05 mm thickness fluorinated ethylene propylene (FEP) plastic film (85905K64, McMaster-Carr). FEP film was chosen here due to its good ductility and negligible acoustic and optical attenuation (as verified experimentally through pulse-echo and optical transmission experiments). The foot was imaged through this film window. As for the light illumination, we adopted the single-reflector illumination method as shown in Figure 1B (19). This method effectively achieves co-planar light illumination and acoustic detection. Our double-reflector method could have achieved the same co-planar effect (20), but it was not implemented in this study because the transducer is already waterproof and can be fully immersed in water. The transducer and the fiber bundle were combined by a 3D printed holder. A dichroic mirror (TECHSPEC® cold mirror, Edmund Optics) was inserted in the middle with a 45-degree angle to the transducer (Figure 1B). The dichroic mirror allows for 97% of 1,064 nm light to pass through at 45-degree incident angle. It also reflects the acoustic wave by 90-degrees. For light delivery, we used a fiber bundle with a 1 cm-diameter circular input and 9 cm-length line output (Dolan-Jenner Industries). During the experiment, the transducer array and the fiber bundle were both submerged in water and moved simultaneously along the scanning direction.
As shown in Figure 1C, the foot was supported by a tiltable platform so that the foot surface could be aligned with the imaging window. All system components fit in a portable cart. For different subjects, we determined the scanning time based on the length of the foot and a step size of 0.1 mm per laser pulse. For example, scanning the linear array over 1 cm along the elevational direction takes 10 seconds with a 0.1 mm/pulse step size and 10 Hz laser PRF. During all human experiments, ultrasound gel and deionized water were used to minimize air bubbles. To avoid motion artifacts, we asked the subject to stay still during the less than two minutes of imaging time. The water tank and imaging platform also helped to secure the foot in place. We also ensured that the skin-to-transducer distance was around 40 mm, which is the acoustic focal length. At the focal spot, the transducer has the highest elevational resolution. After data acquisition, we used the universal back-projection (UBP) algorithm (21) to reconstruct each 2D imaging plane and then stacked multiple planes to form a 3D volume image. For better visualization, all reconstructed 3D images were projected along the axial direction of the transducer array to form a maximum amplitude projection (MAP) image. Due to the weak output of the portable laser, electronic noise can appear as horizontal stripes in the MAP image. A stripe removal algorithm was employed to remove the majority of the stripe noise in the patient data (22).
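For illustration, the minimal Python sketch below shows the delay-and-sum idea at the core of back-projection for a single 2D imaging plane, followed by the MAP step. It is a simplified stand-in rather than our full UBP implementation (the signal filtering and solid-angle weighting terms of (21) are omitted), and the grid extents, function names, and variable names are illustrative assumptions loosely matched to the 8.6 cm array described above.

```python
import numpy as np

def backproject_plane(rf, elem_x, fs, c=1540.0, nx=256, nz=256,
                      x_span=(-0.043, 0.043), z_span=(0.0, 0.05)):
    """Delay-and-sum back-projection of one 2D imaging plane (sketch).

    rf     : (n_elements, n_samples) array of received PA signals
    elem_x : (n_elements,) lateral positions of the array elements [m]
    fs     : sampling rate [Hz];  c : assumed speed of sound [m/s]
    """
    x_grid = np.linspace(*x_span, nx)   # lateral pixel positions [m]
    z_grid = np.linspace(*z_span, nz)   # axial (depth) pixel positions [m]
    img = np.zeros((nz, nx))
    n_samples = rf.shape[1]
    for i, ex in enumerate(elem_x):
        # Acoustic time of flight from every pixel to this element (at z = 0)
        dist = np.sqrt((x_grid[None, :] - ex) ** 2 + z_grid[:, None] ** 2)
        idx = np.round(dist / c * fs).astype(int)
        valid = idx < n_samples
        img[valid] += rf[i, idx[valid]]   # sum each element's delayed signal
    return img

# After the elevational scan: stack the reconstructed planes into a
# (n_planes, nz, nx) volume, then collapse the axial (depth) axis to
# form the MAP image:
# map_image = np.abs(volume).max(axis=1)
```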
The spatial resolution of the system was quantified by imaging a tissue-mimicking phantom. A black human hair was embedded in an agar gel. As shown in Figure 2A,B, the human hair was placed along either the lateral or the elevational direction of the transducer array for imaging. The distance between the human hair and the transducer was set to 4 cm, which is the acoustic focal length of the array. Figure 2C,D show the PA signal profile and Gaussian fitting along the lateral (y) and elevational (z) directions, respectively. The spatial resolution was quantified by the full width at half maximum (FWHM) value, which is 0.7 mm and 1.3 mm along the lateral and elevational directions, respectively. These values agree well with the transducer element pitch and numerical aperture. While the lateral resolution will not vary much at different depths, the elevational resolution will degrade quickly as the object moves away from the transducer focus. This issue can be addressed through our 3D reconstruction (23) or slit-PAT technologies (24,25). However, because the purpose of this manuscript is to highlight the first human results, we did not implement those technologies in this study. The axial resolution, while not quantified in Figure 2, was around 0.5 mm. Because all future images were projected along the axial direction, the axial resolution did not play a significant role in this form of image presentation.
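As an illustration of the FWHM quantification (a generic approach, not the exact analysis script used for Figure 2), a standard Gaussian fit can be applied to the measured profile; the conversion FWHM = 2*sqrt(2 ln 2)*sigma follows directly from the Gaussian model.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma, offset):
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + offset

def profile_fwhm(pos_mm, amp):
    """Fit a Gaussian to a 1D PA amplitude profile; return the FWHM in mm."""
    p0 = [amp.max() - amp.min(),            # amplitude guess
          pos_mm[np.argmax(amp)],           # center guess
          (pos_mm[-1] - pos_mm[0]) / 10.0,  # width guess
          amp.min()]                        # baseline guess
    (_, _, sigma, _), _ = curve_fit(gaussian, pos_mm, amp, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)  # ~2.355 * sigma
```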
To test the in vivo imaging capability of our system, we imaged the right foot of a healthy volunteer. During the experiments, the subject was sitting in front of the system and ultrasound gel was applied as the coupling medium between the foot and the plastic film. The region of interest is illustrated in Figure 3A and the corresponding photoacoustic image is shown in Figure 3B. The photoacoustic image is depth encoded, with different colors representing different depths. Here, the depth represents relative axial distance to the transducer surface (after a certain amount of offset depending on the reconstruction parameters). As expected, clear vasculatures can be seen in the image, indicating good blood circulation. Because the tiltable platform changes the inclination angle of the foot, the relative vessel depth changed from 3 mm at the bottom of the image to 9 mm at the top of the image.
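Depth encoding of this kind can be produced by coloring each MAP pixel according to the depth at which the maximum amplitude occurs along the axial direction. The sketch below shows one minimal implementation; the jet colormap and the normalization choices are illustrative assumptions rather than a description of the exact rendering used for Figure 3B.

```python
import numpy as np
from matplotlib import cm

def depth_encoded_map(volume, z_mm):
    """Render a depth-encoded MAP image.

    volume : (nz, ny, nx) reconstructed PA volume (depth axis first)
    z_mm   : (nz,) depth of each axial slice in mm
    Brightness shows the maximum amplitude along depth; hue shows the
    depth at which that maximum occurs.
    """
    z_mm = np.asarray(z_mm, dtype=float)
    amp = np.abs(volume)
    mip = amp.max(axis=0)                  # maximum amplitude projection
    depth = z_mm[amp.argmax(axis=0)]       # depth of the maximum [mm]
    depth_norm = (depth - z_mm.min()) / (z_mm.max() - z_mm.min() + 1e-12)
    rgb = cm.jet(depth_norm)[..., :3]      # hue encodes depth
    brightness = mip / (mip.max() + 1e-12)
    return rgb * brightness[..., None]     # modulate color by amplitude
```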
Clinical validation was conducted in the UBMD vascular lab. The inclusion criterion was any person 18 years of age or older with a chronic wound on the foot that is presumed to be due to arterial insufficiency or gangrene. The exclusion criteria were pregnant women and adults unable to consent. As mentioned earlier, the core setup of the clinical system was the same as the one used in healthy volunteer imaging with the exception that the laser and DAQ systems were replaced by portable alternatives. Two exemplary PA images are shown in Figure 4. Figure 4B displays the PA foot image of a 43-year-old male with diabetes and 1st and 2nd toe amputation. The yellow dashed box shows the amputation covered by the gauze. Clinical test results from Ankle Brachial Index (ABI) and photoplethysmogram all indicated adequate blood perfusion in the foot. However, the ABI result might not be accurate due to the patient’s diabetes. Toe brachial index (TBI) could not be performed at the time due to loss of digits. Moreover, none of these tests could provide direct visualization of the wound environment. In contrast, the PA image provides a clear visualization of vasculature and tissue background, which also indicates good blood perfusion. Figure 4D is the PA foot image of a 61-year-old male with PAD. Again, because of the toe loss, TBI could not be conducted. However, both ABI (0.53) and CT angiogram results indicated an ischemic condition and vessel stenosis. The PA image also exhibits weaker vascular and background signals, indicating poor blood perfusion. Similar to the healthy volunteer results, the changes in colors in Figure 4B,D are mainly caused by the inclination of the foot. Figure 4E shows a schematic drawing of the two feet. The second patient’s foot was positioned to have a flatter top surface, and thus most signals are in the blue to green region.
It should be noted that we used maximum amplitude projection to display the image. Based on the orientation and depth of the vessel, the vessel signal at a particular location might be weaker than the surrounding tissue signals. Therefore, some vessels look discontinuous in the MAP image. To better quantify tissue perfusion, we calculated the vessel-to-background ratio. Because PAD causes poor blood circulation and chronic wound areas are often associated with leaking vessels, the vessel-to-background ratio will decrease in a poorly perfused tissue. The calculated vessel-to-background ratios are 23.9 and 15.2 in Figures 4B,D, respectively. These values agree with our observation that a well-perfused tissue has better vessel contrast.
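As a minimal sketch of this metric (the exact vessel segmentation is not detailed here, so a simple percentile threshold stands in for it), the ratio can be computed from a MAP image as follows:

```python
import numpy as np

def vessel_to_background_ratio(map_image, vessel_pct=95, background_pct=50):
    """Estimate the vessel-to-background ratio of a MAP image.

    Pixels above the `vessel_pct` percentile stand in for vessel signal,
    and the `background_pct` percentile level stands in for surrounding
    tissue; this threshold segmentation is assumed for illustration only.
    """
    vessel_thresh = np.percentile(map_image, vessel_pct)
    vessel_mean = map_image[map_image >= vessel_thresh].mean()
    background_level = np.percentile(map_image, background_pct)
    return vessel_mean / (background_level + 1e-12)
```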
We successfully developed a prototype PAT system for imaging foot ulcers. We demonstrated the performance of our system in both healthy volunteers and patients. Our preliminary data clearly demonstrated that PAT has a high potential for assessing circulation around the wound. We also miniaturized the system by using a portable Nd:YAG laser, which operates without an optical table. The entire system was mounted on a cart and could be easily transported between clinics. While imaging of human foot vasculature has been demonstrated by other groups (26), our study represents the first implementation based on a linear transducer array, which is significantly cheaper than curved or spherical transducer arrays. In addition, due to the low system profile, positioning of the patient’s foot is very easy and requires little effort from the patient. Most of our studies were conducted within 20 minutes (including the patient preparation time) and had little impact on the clinical workflow.
While encouraging results have been demonstrated, future developments are still needed to further improve user-friendliness and imaging capability. One limitation is that the system can image only the instep-bridge portion of the foot. However, chronic ulcers can occur in other regions, including the sole and heel; therefore, we will need to develop a more versatile system that can image other regions of the foot. The imaging speed and spatial resolution could also be improved by using a higher speed laser and a higher frequency transducer array, respectively. In addition, once a portable multi-wavelength laser becomes available, we could implement functional imaging of oxy- and deoxy-hemoglobin concentrations (27), which may provide better quantification of tissue perfusion. More patient imaging data are also needed to explore additional photoacoustic features of tissue perfusion. Based on wound research literature, other PA features of tissue perfusion could be absolute photoacoustic amplitude, blood vessel density, oxygen saturation, and vessel curvature (28-31). Quantifying these parameters will require multi-wavelength imaging, precise calibration of optical fluence, and a higher frequency transducer array. Nevertheless, with continuing development in PAT technologies, we expect that PAT systems will be widely used in vascular clinics for assessment of tissue perfusion. The technique will facilitate post-surgical decision-making and provide longitudinal monitoring of functional wound information until complete healing.
Funding: This program is supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under award number UL1TR001412 to the University at Buffalo.
Conflicts of Interest: The authors have no conflicts of interest to declare.
Ethical Statement: All human imaging experiments were performed in compliance with the University at Buffalo IRB protocol and all subjects were given informed consent for the imaging study.
- Sen CK, Gordillo GM, Roy S, Kirsner R, Lambert L, Hunt TK, Gottrup F, Gurtner GC, Longaker MT. Human Skin Wounds: A Major and Snowballing Threat to Public Health and the Economy. Wound Repair Regen 2009;17:763-71. [Crossref] [PubMed]
- Mennes O, Slart R, Steenbergen W. Novel Optical Techniques for Imaging Microcirculation in the Diabetic Foot. Curr Pharm Des 2018;24:1304-16. [Crossref] [PubMed]
- Escandon J, Vivas AC, Tang J, Rowland KJ, Kirsner RS. High mortality in patients with chronic wounds. Wound Repair Regen 2011;19:526-8. [Crossref] [PubMed]
- Attinger CE, Evans KK, Bulan E, Blume P, Cooper P. Angiosomes of the foot and ankle and clinical implications for limb salvage: reconstruction, incisions, and revascularization. Plast Reconstr Surg 2006;117:261S-93S. [Crossref] [PubMed]
- Sumpio BE, Forsythe RO, Ziegler KR, van Baal JG, Lepantalo MJ, Hinchliffe RJ. Clinical implications of the angiosome model in peripheral vascular disease. J Vasc Surg 2013;58:814-26. [Crossref] [PubMed]
- Braun JD, Trinidad-Hernandez M, Perry D, Armstrong DG, Mills JL. Early quantitative evaluation of indocyanine green angiography in patients with critical limb ischemia. J Vasc Surg 2013;57:1213-8. [Crossref] [PubMed]
- Frykberg RG, Banks J. Challenges in the Treatment of Chronic Wounds. Adv Wound Care (New Rochelle) 2015;4:560-82. [Crossref] [PubMed]
- Allen J. Photoplethysmography and its application in clinical physiological measurement. Physiol Meas 2007;28:R1. [Crossref] [PubMed]
- Venermo M, Settembre N, Albäck A, Vikatmaa P, Aho P-S, Lepäntalo M, Inoue Y, Terasaki H. Pilot assessment of the repeatability of indocyanine green fluorescence imaging and correlation with traditional foot perfusion assessments. Eur J Vasc Endovasc Surg 2016;52:527-33. [Crossref] [PubMed]
- Sowa MG, Kuo WC, Ko AC, Armstrong DG. Review of near-infrared methods for wound assessment. J Biomed Opt 2016;21:091304. [Crossref] [PubMed]
- Zhang S, Gnyawali S, Huang J, Ren W, Gordillo G, Sen CK, Xu R. Multimodal imaging of cutaneous wound tissue. J Biomed Opt 2015;20:016016. [Crossref] [PubMed]
- Wang LV, Yao J. A practical guide to photoacoustic tomography in the life sciences. Nat Methods 2016;13:627. [Crossref] [PubMed]
- Xia J, Yao J, Wang LHV. Photoacoustic Tomography: Principles and Advances. Electromagn Waves (Camb) 2014;147:1-22. [Crossref] [PubMed]
- Wang Y, Li Z, Vu T, Nyayapathi N, Oh KW, Xu W, Xia J. A Robust and Secure Palm Vessel Biometric Sensing System based on Photoacoustics. IEEE Sensors J 2018;18:5993-6000. [Crossref]
- Irisawa K, Hirota K, Hashimoto A, Murakoshi D, Ishii H, Tada T, Wada T, Hayakawa T, Azuma R, Otani N, editors. Photoacoustic imaging system for peripheral small-vessel imaging based on clinical ultrasound technology. International Society for Optics and Photonics: SPIE BiOS, 2016.
- Lim HT, Matham MV. Hybrid-modality ocular imaging using a clinical ultrasound system and nanosecond pulsed laser. J Med Imaging (Bellingham) 2015;2:036003. [Crossref] [PubMed]
- Zhou Y, Wang D, Zhang Y, Chitgupi U, Geng J, Wang Y, Zhang Y, Cook TR, Xia J, Lovell JF. A Phosphorus Phthalocyanine Formulation with Intense Absorbance at 1000 nm for Deep Optical Imaging. Theranostics 2016;6:688. [Crossref] [PubMed]
- American National Standards Institute. American national standard for safe use of lasers. Laser Institute of America, 2007.
- Montilla LG, Olafsson R, Bauer DR, Witte RS. Real-time photoacoustic and ultrasound imaging: a simple solution for clinical ultrasound systems with linear arrays. Phys Med Biol 2013;58:N1-12.
- Wang Y, Lim RSA, Zhang H, Nyayapathi N, Oh KW, Xia J. Optimizing the light delivery of linear-array-based photoacoustic systems by double acoustic reflectors. Sci Rep 2018;8:13004. [Crossref] [PubMed]
- Xu M, Wang LV. Universal back-projection algorithm for photoacoustic computed tomography. Biomedical Optics 2005:251-4.
- Münch B, Trtik P, Marone F, Stampanoni M. Stripe and ring artifact removal with combined wavelet—Fourier filtering. Opt Express 2009;17:8567-91. [Crossref] [PubMed]
- Wang D, Wang Y, Zhou Y, Lovell JF, Xia J. Coherent-weighted three-dimensional image reconstruction in linear-array-based photoacoustic tomography. Biomed Opt Express 2016;7:1957-65. [Crossref] [PubMed]
- Wang Y, Wang D, Zhang Y, Geng J, Lovell JF, Xia J. Slit-enabled linear-array photoacoustic tomography with near isotropic spatial resolution in three dimensions. Opt Lett 2016;41:127-30. [Crossref] [PubMed]
- Wang Y, Wang D, Hubbell R, Xia J. Second generation slit-based photoacoustic tomography system for vascular imaging in human. J Biophotonics 2017;10:799-804. [Crossref] [PubMed]
- Nagae K, Asao Y, Sudo Y, Murayama N, Tanaka Y, Ohira K, Ishida Y, Otsuka A, Matsumoto Y, Saito S. Real-time 3D Photoacoustic Visualization System with a Wide Field of View for Imaging Human Limbs. F1000Res 2018;7:1813. [Crossref] [PubMed]
- Yao J, Wang L, Yang JM, Maslov KI, Wong TTW, Li L, Huang CH, Zou J, Wang LV. High-speed label-free functional photoacoustic microscopy of mouse brain in action. Nat Methods 2015;12:407-10. [Crossref] [PubMed]
- Kang Y, Choi M, Lee J, Koh GY, Kwon K, Choi C. Quantitative analysis of peripheral tissue perfusion using spatiotemporal molecular dynamics. PLoS One 2009;4:e4275. [Crossref] [PubMed]
- Matsumoto Y, Asao Y, Yoshikawa A, Sekiguchi H, Takada M, Furu M, Saito S, Kataoka M, Abe H, Yagi T. Label-free photoacoustic imaging of human palmar vessels: a structural morphological analysis. Sci Rep 2018;8:786. [Crossref] [PubMed]
- Sun C, Munn LL. Lattice-Boltzmann simulation of blood flow in digitized vessel networks. Comput Math Appl 2008;55:1594-600. [Crossref] [PubMed]
- Li WW, Carter MJ, Mashiach E, Guthrie SD. Vascular assessment of wound healing: a clinical review. Int Wound J 2017;14:460-9. [Crossref] [PubMed]
"dump": "CC-MAIN-2021-39",
"url": "https://qims.amegroups.com/article/view/25851/23975",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057584.91/warc/CC-MAIN-20210924231621-20210925021621-00550.warc.gz",
"language": "en",
"language_score": 0.8945536017417908,
"token_count": 5394,
"score": 2.703125,
"int_score": 3
} |
Introduction to DevOps Architecture
In software engineering, development and operations both play vital roles in delivering applications. Development comprises analyzing requirements and designing, developing, and testing software components or frameworks. Operations covers the administrative processes, services, and support for the software. When Development and Operations are brought together to collaborate, DevOps architecture comes into the picture. DevOps architecture is thus the solution for mending the gap between Development and Operations teams so that delivery can be faster with fewer issues.
DevOps Architecture and Components
DevOps architecture is used for applications hosted on cloud platforms and for large distributed applications. Agile development is used so that integration and delivery can be continuous. When the Development and Operations teams work separately, it is time-consuming to design, test, and deploy, and if the teams are not in sync, delivery is delayed. DevOps enables the teams to address these shortcomings and increase productivity.
Below are the various DevOps components:
1. Build

Without DevOps, the cost of resource consumption was evaluated based on pre-defined individual usage with fixed hardware allocation. With DevOps, the use of cloud computing and shared resources comes into the picture, and the build is driven by the user's need, which is a mechanism to control the usage of resources or capacity.
2. Code

Many good practices, such as the widely used Git, enable the code to be used effectively, which ensures not only writing the code for the business but also helps to track changes, get notified about the reason behind a change, and, if necessary, revert to the original code. The code can be arranged properly in files and folders, and it can be reused.
3. Test

The application moves to production after it is tested. Manual testing consumes more time in testing and moving the code to production. Testing can be automated, which decreases the testing time and therefore the time to deploy the code to production, since automating the running of test scripts removes many manual steps.
4. Plan

DevOps uses the agile methodology to plan development. Unplanned work always reduces productivity. With the Development and Operations teams in sync, work can be organized and planned accordingly so as to increase productivity.
5. Monitor

Continuous monitoring is used to identify any risks of failure. It is also helpful in tracking the system accurately so that the health of the application can be checked. Monitoring becomes easier with services whose log data can be watched through many third-party tools such as Splunk.
6. Deploy

Most systems support schedulers for automated deployment. A cloud management platform enables users to capture accurate insights and view the optimization scenario and analytics on trends through the deployment of dashboards.
7. Operate

DevOps changes the traditional approach of developing and testing separately. The teams operate in a collaborative way, with both teams participating actively throughout the service lifecycle. The operations team interacts with developers, and together they come up with a monitoring plan that serves the IT and business requirements.
8. Release

Generally, deployment to an environment is automated, but deployment to the production environment is done through manual triggering. Most release management processes specify that deployment into production be done manually, to lessen the impact on customers.
Features of DevOps Architecture
Below are the key features of DevOps Architecture.
1. Automation

Automation most effectively reduces time consumption, specifically during the testing and deployment phases. Productivity increases and releases are made quicker through automation, with fewer issues, since tests are executed more rigorously. This leads to catching bugs sooner so that they can be fixed more easily. For continuous delivery, each code change passes through automated tests and cloud-based builds and services, which promotes production through automated deploys.
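As a toy illustration of that gate, the snippet below shows the kind of unit test a CI service would run on every push; the function and its rules are invented for the example.

```python
def apply_discount(price: float, percent: float) -> float:
    """Example function under test; names and rules are illustrative."""
    return max(price * (1 - percent / 100), 0.0)

def test_discount_is_applied():
    assert apply_discount(100.0, 10) == 90.0

def test_discount_never_goes_negative():
    assert apply_discount(10.0, 150) == 0.0

# A CI service runs `pytest` on every commit; any failing assertion blocks
# the automated deploy, so bugs are caught before release.
```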
2. Collaboration

The Development and Operations teams collaborate as a single DevOps team, which improves the cultural model as the teams become more effective and productive, strengthening accountability and ownership. The teams share responsibilities and work closely in sync, which in turn makes deployment to production faster.
3. Integration

Applications need to be integrated with other components in the environment. The integration phase is where the existing code is integrated with new functionality, and then testing takes place. Continuous integration and testing enable continuous development. The frequency of releases and the use of microservices lead to significant operational challenges; to overcome them, continuous integration and delivery are implemented to deliver in a quicker, safer, and more reliable manner.
4. Configuration Management
This ensures that the application interacts only with the resources concerned with the environment in which it runs. Configuration files are created in which configuration external to the application is separated from the source code. A configuration file can be written at deployment time, or it can be loaded at run time, depending on the environment in which the application is running.
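A minimal sketch of this separation, with file names and keys invented for the example:

```python
import json
import os

def load_config(env=None):
    """Load environment-specific settings kept outside the source code.
    Expects files named like config.dev.json or config.prod.json."""
    env = env or os.environ.get("APP_ENV", "dev")
    with open(f"config.{env}.json") as handle:
        return json.load(handle)

# The same build artifact runs unchanged everywhere: only APP_ENV differs.
# settings = load_config()
# db_url = settings["database_url"]
```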
DevOps architecture enables collaboration between teams, which is one of the essential features of delivery. It improves the work culture among teams so that they remain in sync and understand the status of work related to other teams. It helps in getting releases out faster and enables teams to plan work better and get it done more effectively. There are many DevOps architecture certifications available from Amazon, Microsoft, and Red Hat. DevOps architecture effectively decreases deployment time, which makes it highly recommended among organizations.
This is a guide to DevOps architecture. Here we discussed what DevOps architecture is, along with its components and features, in detail. You can also go through our other suggested articles to learn more.
"dump": "CC-MAIN-2021-04",
"url": "https://www.educba.com/devops-architecture/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703529080.43/warc/CC-MAIN-20210122020254-20210122050254-00181.warc.gz",
"language": "en",
"language_score": 0.9410773515701294,
"token_count": 1124,
"score": 2.9375,
"int_score": 3
} |
Nonprofit and For Profit Accounting Differences
Posted on Wednesday, June 04, 2014
In both nonprofit and for profit accounting, the goal is the same – to provide accurate and timely financial information to users for decision making. There are significant differences though between accounting for nonprofit and for profit entities starting with what the users of the financial statements are focused on. For profit stakeholders are focused on profitability and the bottom line while nonprofit stakeholders are more concerned with achieving the organization’s mission and allocation of resources.
While the basic information contained in each type of entity’s financial statements is the same, the terminology used is different. A for profit balance sheet shows assets, liabilities and retained earnings. A nonprofit statement of financial position shows assets, liabilities and net assets. A for profit income statement shows revenues less expenses, which equals net income (or loss). A nonprofit statement of activities shows revenues less expenses, which equals the change in net assets.
Beyond terminology, there are also some key differences in the recording of financial activity that are specific to nonprofits.
Contributions. Similar to for profit entities, nonprofit organizations may receive earned revenue through an exchange transaction in which the other party receives a direct tangible benefit; however, unlike for profit entities, nonprofits also receive contributions (nonreciprocal support). Generally, contributions are recognized in revenue in the period received. An unconditional promise to give a contribution is recorded when the promise is made. A conditional promise to give a contribution is recorded when the condition has been met. Also see Promises versus Intentions.
Restricted Contributions. Nonprofits may receive contributions with donor-imposed restrictions which limit the use of the funds to a specific purpose or time period. Restricted contributions must be recorded by type – permanently restricted or temporarily restricted. Also see Restricted Contributions.
In-kind Contributions. Nonprofit organizations can also receive noncash contributions of goods and services called in-kind that are used in the ordinary course of doing business. In order for the organization to have a true cost of operating the organization, these in-kind contributions need to be recorded by the nonprofit. In-kind contributions are recorded in revenue at fair value as of the date of the gift with an offsetting entry to an expense account, which will result in no effect to the change in net assets. Sometimes in-kind contributions may be recorded to an asset, which increases the change in net assets.
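A hypothetical illustration: if a law firm donates $5,000 of legal services the nonprofit would otherwise have purchased, the organization records $5,000 of in-kind contribution revenue and an offsetting $5,000 of professional services expense, leaving the change in net assets at zero. If instead a donor gives the organization a $5,000 computer, the offsetting entry is to a fixed asset, so the change in net assets increases by $5,000.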
Functional Expenses. Nonprofits are required to report expenses by functional classification – program, management and general, and fundraising. In addition, health and welfare organizations are required to include a statement of functional expenses as part of their financial statements. Functional reporting provides a tool used to determine if the nonprofit is using its resources efficiently. Also see Functional Expenses.
Posted by: Carrie Minnich, CPA
Posted in Mission Minded Nonprofits
Disclaimer: The information contained in Dulin, Ward & DeWald’s blog is provided for general educational purposes only and should not be construed as financial or legal advice on any subject matter. Before taking any action based on this information, we strongly encourage you to consult competent legal, accounting or other professional advice about your specific situation. Questions on blog posts may be submitted to your DWD representative.
"dump": "CC-MAIN-2017-17",
"url": "http://dwdcpa.com/blog/nonprofit-and-for-profit-accounting-differences/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125719.13/warc/CC-MAIN-20170423031205-00347-ip-10-145-167-34.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9439066052436829,
"token_count": 658,
"score": 2.609375,
"int_score": 3
} |
Quite aside from whether or not one subscribes to a particular religious doctrine, the historical fact of religion as a human artifact poses intriguing questions about the whether or not religious belief in a broad sense (whether monotheistic, polytheistic or some other form) conferred evolutionary advantages that enabled humans to survive and thrive.
Facing the seemingly random acts of nature, there would seem to be a case for the evolution of supernatural beliefs as a means of psychological stability: the thought that we could make some kind of sense of nature and attempt an influence of events through a supernatural intermediary. Furthermore, the imposition of “right and wrong,” the idea of morals as defined and enforced by a god or gods might help establish and maintain a social order that might otherwise fall apart if left solely to the efforts of people. Cynics might also consider religion a product of a ruler or ruling class for their own benefit.
A new research effort is aiming to collect a comprehensive and global fact base on the nature of religious belief. The effort is called CERC or the Cultural Evolution of Religion Research Consortium. The effort is a product of HECC, the Centre for Human Evolution, Cognition, and Culture.
HECC is a joint University of British Columbia and Simon Fraser University research hub that connects evolutionary scientists to psychologists, religious studies scholars and others in the humanities and social sciences. The Centre recently received a $3 million grant that will provide the foundation for the international Cultural Evolution of Religion Research Consortium. The CERC website describes the work as a
…six-year project [that] brings together the expertise of over fifty scientists, social scientists and humanities scholars from universities across North America, Europe and East Asia—along with post docs and graduate students—into a research network that will be called the Cultural Evolution of Religion Research Consortium (CERC). Over this six-year project, CERC aims to answer the question of what religion is, how it is linked to morality, and why it plays such a ubiquitous role in human existence.
Tristan Hopper in an article for the National Post wrote:
Seven years ago, social psychologist Ara Norenzayan gathered 125 participants at the University of British Columbia, asked them to solve a word puzzle and then handed them $10 with instructions to share it with a stranger. As expected, some participants kept the whole sum and some split it 50-50 — but the surprising thing was how easily their generosity could be moulded by the subtleties of the word puzzle.
Participants who completed a puzzle peppered with religious words, such as “spirit,” “God” or “prophet,” largely decided to split the cash. Participants with neutral word puzzles, meanwhile, barely shared at all. Even if they did not realize it, the belief in a “supernatural police” officer appeared to be inspiring subconscious outpourings of generosity, Mr. Norenzayan mused to reporters.
For decades, academia has largely ignored religion as irrelevant or at worst, parasitic. But a new — and controversial — theory holds that cities, agriculture and even society as we know it would never have taken hold if humanity had not believed a deity was keeping tabs. And now, with six years, $3-million and a travel schedule that will bring them to the most remote corners of the planet, a team of Vancouver researchers are out to prove once and for all that religion may be humanity’s greatest “cultural technology.”
“There is a view that religion is an ancient superstition that’s going to fall away,” said Edward Slingerland, a professor of Asian studies at the University of British Columbia and the lead of a massive Canadian project billed as world’s largest academic study of religion.
“If our theory is right it’s actually been the cornerstone to civilizations.”
Throughout most of history, the default human religion was tribal: Events were randomly governed by groups of supernatural beings similar to the gods of ancient Greece. Sky gods stole the sun every night, the fertility gods made women pregnant — but the behaviour was haphazard.
Then, about 5,000 years ago, a new and revolutionary type of deity began emerging in the Middle East. For the first time in history, gods cared what humans were doing.
“The innovation is having gods that care about moral values: hard work, not cheating your neighbours, not shirking from battle,” said Mr. Slingerland. “That makes it possible for people to bind themselves into larger units than was possible before.”
Traditionally, scientists have seen agriculture — not religion — as the singular foundation that allowed humans to build cities and draw up complex political systems. The problem is; agriculture was not the immediate boon that might be assumed. Agriculture required massive amounts of cooperation and labour, was riddled with disease, and according to archeologists, left farmers in much worse shape than their hunter-gatherer cousins.
For early humans to have stayed focused on such a seemingly futile project, goes the theory, something deeper must have been holding them together. “As soon as you start needing complex irrigation systems, our hypothesis is that you can’t get very big without religion,” said Mr. Slingerland.
One component of CERC is to pull together historians, anthropologists and archeologists from around the world to assemble a gargantuan digital catalogue of every religious belief held by every culture throughout time.
At the same time, CERC will dispatch teams of psychologists to more than 20 field sites all around the globe to gauge the religious beliefs of people from Northern Ireland to the Central African Republic using psychological tests such as the priming experiment mentioned in the introduction.
When the database of CERC’s findings goes public in 2018, researchers will be able to select any historical period or region of the world and be provided with an itemized list of what the locals believed, how it affected their population size, agricultural prowess and military might — and even how they expressed their collective faith, right down to whether they circumcised their sons or got neck tattoos.
It is akin to a religious version of Oxford University’s 19th century push to document the origins and mutations of every single word of the English language — a project that ultimately yielded the Oxford English Dictionary. Religious data in hand, researchers then expect to draw up sweeping pictures of how different beliefs grew, shrank or destroyed societies.
“We have an idea that certain rituals and beliefs make societies more successful and more likely to expand at the expense of other societies,” said Joseph Henrich, a UBC psychologist and CERC partner. Conversely, researchers also expect to trace a “mellowing out” of religious beliefs as societies modernize.
As courts, police and national beliefs move in to take the place once held by religion, deities are allowed to shift from punishment-minded wardens to the “kinder, gentler” God worshipped by most modern world religions. “The appearance of a loving God that doesn’t do too much punishing is a modern phenomenon,” said Mr. Henrich. Naturally, according to Mr. Slingerland, a project like this is “fundamentally controversial.”
Right off the bat, it enrages the religious by starting on the premise that gods are not real. At the same time, atheists can be equally enraged by the notion that without religion, humanity would still be foraging for berries.
More contentious still, is the inevitable fact that the data will show some religions as being “better” at building prosperous societies than others. A recent Harvard study, for instance, pored over 40 years of data and concluded that a country’s belief in hell provided a measurable boost to its economy.
Spending his teen years in Beirut during the sectarian violence of the Lebanese Civil War, Mr. Norenzayan is well-acquainted with the uglier side of religion. And oddly, it is this ugly side that he credits for academia’s newfound interest in the topic. Among social scientists, the September 11th attacks spawned “a realization that there’s religion in the world and it can turn toxic.”
Some of CERC’s findings may be irksome, but “the answer is not to hide the evidence,” he said. “We don’t do that with anything else, why would we do it with religion?”
"dump": "CC-MAIN-2021-49",
"url": "https://xray-delta.com/tag/and-culture/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358560.75/warc/CC-MAIN-20211128134516-20211128164516-00059.warc.gz",
"language": "en",
"language_score": 0.9459226727485657,
"token_count": 1760,
"score": 3.28125,
"int_score": 3
} |
Words By Stephen McCallum, Photo by Ahmad Hakim
Stephen McCallum is a former President of the University of South Australia Student Association and National Environment Officer of the National Union of Students. He is currently studying a Bachelor of Arts here at UniSA, majoring in Indigenous Culture and Australia Society.
Australia was founded on the basis of ‘terra nullius’, which essentially means there were no people here before us. The issue with this is that there were people living in Australia before 1788, and this had been the case for well over 60 000 years prior to 1788.
Under British and International law around the time of 1788, there were three ways to claim land. If there were people living on the land, they could declare war and occupy the land or negotiate a treaty with the current custodians in exchange for the land. If there were no people living in the area, they could simply claim the unclaimed land.
The colonies of mainland Australia did not legitimately claim land, as there were people here before they arrived and they did not declare war or negotiate a treaty. Tasmania is somewhat different in that they eventually negotiated a treaty with Aboriginal Nations, but it must be pointed out that it was under the duress and active threats of genocide, and the treaty was not honoured by the Tasmanian government.
Some people claim that the numerous genocides that took place against Aboriginal peoples in every state of Australia are a form of war, somehow legitimising the Australian government’s claim to the land. Declaring a state of war is important because it allows for the negotiation of peace and that’s something that could distinguish war from genocide. It pays to remember that not every physical confrontation between groups is a war.
It is because of this injustice that we today enjoy using land in our comparatively comfortable lives.
It is this injustice that still negatively impacts Aboriginal peoples in Australia today.
Many Australians think of colonisation as some kind of favour for Aboriginal Nations because the British introduced Western culture and technology to make their lives easier, but the reality is much different. Australian government laws prevent Aboriginal peoples from living in the way Aboriginal peoples lived prior to colonisation, while also disadvantaging Aboriginal peoples participating in Western society.
Aboriginal peoples currently have a shorter life expectancy than black South Africans had during apartheid. Aboriginal infant mortality rates are more than double those of other Australians. Community workers who have volunteered in Sudan regularly describe Aboriginal communities as being in worse condition, with poorer infrastructure and social services.
I was recently moved by an article about Murrumu Walubara Yidindji who was a well-respected journalist in the Canberra Press Gallery. Murrumu has renounced his Australian identity and now lives under the law of the Yidindji Nation of Northern Queensland. I strongly recommend you read his article published in The Guardian titled ‘The man who renounced Australia’ and engage with often ignored Aboriginal and Australian issues from an Aboriginal perspective.
"dump": "CC-MAIN-2021-04",
"url": "https://versemag.com.au/magazine/want-justice/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514423.60/warc/CC-MAIN-20210118061434-20210118091434-00032.warc.gz",
"language": "en",
"language_score": 0.976777970790863,
"token_count": 612,
"score": 3.359375,
"int_score": 3
} |
Hyperbolic geometry, conceived by mathematician Carl Gauss in 1816, is stranger still. Like planar geometry, it posits that the shortest distance between two points is a straight line. And hyperbolic space, like spherical space, has a constant curvature—except the curvature is negative rather than positive. Hyperbolic geometry describes a world that is curving away from itself at every point, making it the precise opposite of a sphere, whatever that might look like. (One is tempted to picture an inside-out sphere, but that still describes a positive curvature, since space is curving toward itself at each point.)
Gauss never published the idea, perhaps because he found it inelegant. In 1825 the Hungarian mathematician János Bolyai and the Russian mathematician Nicolay Lobachevsky independently rediscovered hyperbolic geometry. They declared that all the normal rules of euclidean geometry would apply to this geometry except for Euclid's parallel postulate, which states that if you have a straight line and a point not on that line, there exists at most one straight line that passes through the point and is parallel to the line. In hyperbolic space, more than one parallel line runs through that external point; in fact, an infinite number of them do.
The rediscovery of hyperbolic space was not greeted enthusiastically by the analytically oriented German and Austrian mathematicians who dominated mathematics in the West; they dreamed of a logical, orderly universe that could be represented through equations. Not until very recently—after the fall of the iron curtain—did the strange and illogical beauty of hyperbolic forms emerge yet again to claim the attention of mathematicians.
I ask Henderson how it is that shapes that cannot be imagined nonetheless can be found in his wife's knitting bowl. "A hundred years ago, the mathematician David Hilbert proved a theorem that it is impossible to represent the hyperbolic plane in three-dimensional space analytically," he says. " 'Analytically' means 'with equations.' Everybody left off the word analytically later on. They were worried that mistakes or errors would creep into mathematics through geometric intuition, and so they discouraged the study of geometry and everything associated with this weird kind of thinking."
The prejudice against a mathematics that could not be expressed strictly by equations did not exist when Taimina grew up in Latvia under Soviet-style math schooling. "We were taught to start with the picture," she recalls. "You figure out what is happening, and then you set out to prove it."
Because the Soviet system also encouraged shortages and the production of shoddy, unappealing goods, every woman learned how to knit and crochet. "You fix your own car, you fix your own faucet—anything," she says with an easy laugh. "When I was growing up, knitting or any other handiwork meant you could make a dress or a sweater different from everybody else's."
The first person to solve the problem of how to construct a simple physical model of the hyperbolic plane for classroom use was mathematician William Thurston, now a colleague of Taimina and Henderson's at Cornell. Unlike most of his American colleagues, Thurston never put much stock in the attempt to represent geometric intuition with mathematical equations.
Henderson's method of constructing a hyperbolic plane involved taping together thin, circular strips of paper. He learned the method from Thurston at a workshop at Bates College in 1978. Afterward, on a camping trip, he constructed his first hyperbolic plane using his Swiss army knife and some Scotch tape.
Twenty years later, Taimina remembers, Henderson was still using the same tattered model. When she was assigned to teach his class on hyperbolic geometry at Cornell, where she had an appointment as a visiting professor, she was forced to confront it.
"It was disgusting," Taimina recalls with a playful shake of her head. "So I spent the summer crocheting a classroom set of hyperbolic forms. We were sitting at the swimming pool with David's family, my girls were learning to speak English and swimming, and I was sitting and crocheting. People walked by, and they asked me, 'What are you doing?' And I answered, 'Oh, I'm crocheting the hyperbolic plane.' "
She begins by crocheting a short row of stitches. Onto that row she adds successive, concentric rows of stitches. The rows, or rings, increase exponentially in length: one additional stitch in every two loops of the previous rows, say, or two stitches in every five. As the number of stitches per row increases, the resulting form becomes wavy and scrunched. Precisely because hyperbolic space expands exponentially, Taimina explains, it requires crocheting rather than knitting. "In knitting, all the stitches you are working with, you have on your needles," she says, adding some stitches to a shape she is completing. "So given the rate of increase, very quickly you cannot move your needles." Crocheting doesn't require all the stitches to be held on the needles simultaneously, enabling Taimina to pack more stitches into a smaller space. Crocheted forms are also stiff enough to hold their shape.
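The arithmetic behind that scrunching is easy to sketch. Assuming, for illustration, a 1.5-inch first row and a fixed per-row increase rate, each row's length is the previous row's length times a constant ratio:

```python
def row_lengths(first_row_inches=1.5, ratio=1.5, rows=22):
    """Row lengths when every row multiplies the previous one by `ratio`
    (one extra stitch per two loops -> 3/2; two per five -> 7/5)."""
    lengths = [first_row_inches]
    for _ in range(rows - 1):
        lengths.append(lengths[-1] * ratio)
    return lengths

print(round(row_lengths()[-1]))   # ~7,500 inches after 22 rows at ratio 3/2
# The 369-inch outer row quoted in the article implies a gentler rate:
# solving 1.5 * r**21 = 369 gives r of roughly 1.3 per row.
```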
"This is a very interesting one," she says, drawing my attention to a purple flower of yarn that looks like a sea anemone caught on her crocheting needle. The inner row of stitches is an inch and a half long. There appear to be 16 concentric rows in all. Taimina asks me to estimate the circumference of the crenulated purple sea anemone. I guess 24 feet.
"That's close enough," Henderson says. "It's 30 feet—369 inches."
Taimina corrects him: There are 22 rows between the first and last row, not 16. The rate of increase from one row to the next is amazing, and the resulting forms are unusually beautiful. What Taimina's crocheting magically reveals is that hyperbolic geometry is actually part of our everyday universe. Video-game designers can use hyperbolic geometry to create lifelike clothing and hair. Some neurologists even believe that the brain stores information according to the rules of hyperbolic geometry. Although physical applications for hyperbolic geometry are less than two decades old, they represent a profound shift in mathematical thinking, away from the dream of a perfect analytic universe toward a more open and intuitive one.
When I ask Taimina for examples of hyperbolic forms in her own life, she points out the window to the backyard. "It's too dark now," she says. "I can show you tomorrow in the garden. Crinkled parsley. Some lettuce. Wood ear mushrooms. They are all hyperbolic forms."
I dimly remember that astronomers have proposed that the universe may be hyperbolic, displaying a constant negative curvature. Henderson nods and says: "There is evidence now that if you go off in certain directions, you'll come back. Which could be spherical or hyperbolic."
That leads me back to the very intuitive idea that I have been harboring all evening: What better shape for the universe than a pair of hyperbolic pants?
"It's unlikely," Taimina says. To console me, she agrees to model the hyperbolic skirt that she made for a recent talk sponsored by the Institute for Figuring in Los Angeles, after which the film director Werner Herzog took her to dinner and then kissed her good night. The skirt is made of 10 skeins of cotton yarn, each of which is 689 feet long. "The ruffles divide into other ruffles," Henderson says, as Taimina spins around the living room. "That's how you can tell it's hyperbolic." When I suggest a clothing line for mathematicians, Taimina smiles.
"This is one of the hottest silhouettes this season," she says. "It's actually similar to a very old pattern called the godet skirt. It's made with six or eight panels, and it's known to flatter any figure. So you see why hyperbolic geometry is truly important." | <urn:uuid:edd4f9e9-1d0d-4fe1-bdd0-ef42d41bc2ae> | {
"dump": "CC-MAIN-2015-22",
"url": "http://discovermagazine.com/2006/mar/knit-theory",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928019.31/warc/CC-MAIN-20150521113208-00313-ip-10-180-206-219.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9688172936439514,
"token_count": 1702,
"score": 3.671875,
"int_score": 4
} |
I am thinking about making an introductory book to some different "languages", for self learning. But I realize I'm blending the writing system with the pronunciation system, and am starting to get confused. To help ease the confusion, I am wondering if technically it's possible to write any language down using any script (so for example, write English using Hebrew Script, Write Sanskrit with Arabic Script, I guess you can't write English with Chinese Script so there's a counter example I just realized, but still I would like to ask to learn more).
Basically I started with the idea for a "Simplified Hebrew Grammar". I was going to start by taking the letters (symbols/orthography) and writing out their pronunciations. But then after a few hours of trying that I realized the Hebrew letters sometimes have multiple pronunciations depending on context. Then I think about English, which uses the Latin script, and you pretty much can't say what the pronunciation of a single letter is without resorting to its surrounding context in a word or something. The letter "a" isn't just "ah"; it sounds different in "cat", "father", etc.
So then I thought about well what if you had books on (1) Scripts and second books on (2) Pronunciation, or writing using a particular script.
But say you had a book on the Latin orthography. Other than how to actually write the letters (imagine kindergarten templates/guides), it doesn't seem there is much to say about them. They represent sounds all over the place, depending on the natural language being spoken, the dialect, etc. What else can be said of a writing system other than just how to literally do the calligraphy?
Anyways, so then it seems like "we're back to a book combining both orthography and pronunciation" again... Like an "English" book, or a "Spanish" book, not a "Latin Orthography" book.
But then I think of writing systems like Devanagari, which seems much more robust and refined. Each letter/shape has a specific sound, which is modified only according to specific rules (for the most part?). In this case, you could write a book just on "Devanagari", mapping each letter to its pronunciation (a short kindergarten book). This is why I started to ask this question here. In Devanagari (or Sinhala, or other South Asian scripts), you can write most other languages, it seems to me.
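To make the contrast concrete, here is a toy sketch; the Devanagari sample is tiny and the phonetic values are simplified (Devanagari consonants also carry an inherent vowel, ignored here):

```python
# A (mostly) phonemic script supports a context-free lookup table:
DEVANAGARI_TO_SOUND = {
    "क": "k",    # ka
    "ख": "kʰ",   # kha
    "ग": "g",    # ga
    "म": "m",    # ma
    "अ": "ə",    # a
}

# English resists such a table: the same letter needs its context.
ENGLISH_A = {
    "cat": "æ",
    "father": "ɑː",
    "gate": "eɪ",
}
```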
My question is, which writing systems can be used to write other languages? Can these writing systems write all other languages or only some? And which writing systems can't write other languages (like Chinese)?
As a tangent, I'm imagining if there is ever a language where a letter such as b is pronounced /b/ in one context, /h/ in another, and /t/ in another, just to make things even more complicated. But just a tangent lol.
"dump": "CC-MAIN-2021-21",
"url": "https://linguistics.stackexchange.com/questions/36334/can-all-scripts-be-used-to-write-all-different-languages",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988858.72/warc/CC-MAIN-20210508091446-20210508121446-00304.warc.gz",
"language": "en",
"language_score": 0.9576426148414612,
"token_count": 628,
"score": 2.78125,
"int_score": 3
} |
The Sacramento Valley is a globally important resting and refueling stop for birds migrating along the Pacific Flyway. The valley provides habitat for more than 400,000 birds making their way from Alaska to Argentina and back.
A new study shows the amount of flooded habitat available during peak migration for the birds has decreased every year for the last 30 years.
“We estimate on average that we’ve lost an area of about four times the size of Central Park in each year,” says Danica Schaffer-Smith, a doctoral student with Duke University who conducted the study.
After peak migration, the study also found that the amount of water on the landscape increases five-fold.
Read more and listen to story here: Capital Public Radio, April 4, 2017
"dump": "CC-MAIN-2019-09",
"url": "http://www.camigratorybirds.org/?p=338",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481111.41/warc/CC-MAIN-20190216190407-20190216212407-00353.warc.gz",
"language": "en",
"language_score": 0.9424504041671753,
"token_count": 157,
"score": 3.390625,
"int_score": 3
} |
The US state of New York on Thursday released a new State Energy Plan, a “comprehensive roadmap” to help it achieve the most ambitious GHG reduction targets in the country.
Some highlights of the plan:
- Requires New York to reduce GHGs across all sectors to 40% below 1990 levels by 2030 on the way to meeting an 80% cut by 2050. California’s governor last month called for the same target for his state, for which lawmakers are on course to approve it by October.
- Mandates state electricity providers to produce half of their generation from renewable sources by 2030 – including solar, wind, hydroelectric, and biomass. The state currently has a 30% renewable energy standard.
- Aims to decrease energy consumption in buildings to 23% below 2012 levels, the equivalent of 600 trillion Btu.
- Makes no specific provision for using a cap-and-trade mechanism to reduce economy-wide emissions, but does note that “further adjustments” could have to be made to the RGGI program to help the state’s power sector meet the new objectives as well as those set out in the EPA’s Clean Power Plan slated to be finalized later this summer.
- New York is the fourth largest US state by population (19.7 million) and third by GDP ($1.35 trillion) behind California and Texas.
- The state emitted 204.6 mt of CO2 in 1990, requiring a reduction to 122.76 mt under the new plan (arithmetic checked below the list). Emissions are forecast to top out at 177.19 mt this year and 174.69 mt in 2030 under BAU, according to figures from the New York State Energy Research and Development Authority.
- Emissions from New York’s power sector are projected to rise under the plan due to increased electrification of the transportation sector. For that reason, the state is urging the EPA not to set New York CPP targets at a level that inhibits the state’s ability to meet economy-wide emissions targets.
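A quick check of the plan's headline arithmetic, flagged in the baseline bullet above: a 40% cut from the 1990 baseline means 204.6 mt × (1 − 0.40) = 122.76 mt, matching the stated 2030 requirement. Against the 174.69 mt BAU forecast for 2030, the plan must therefore close a gap of roughly 52 mt.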
By Robert Mullin – [email protected]
"dump": "CC-MAIN-2022-05",
"url": "https://carbon-pulse.com/5506/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301592.29/warc/CC-MAIN-20220119215632-20220120005632-00294.warc.gz",
"language": "en",
"language_score": 0.9280886054039001,
"token_count": 438,
"score": 2.71875,
"int_score": 3
} |
Federal Insecticide, Fungicide, and Rodenticide Act
Long title: Federal Insecticide, Fungicide, and Rodenticide Act of 1947, to regulate the marketing of economic poisons and devices, and for other purposes.
Enacted by: the 80th United States Congress
Effective: June 25, 1947
Public Law: P.L. 80-104
Statutes at Large: 61 Stat. 163
U.S.C. sections created: 7 U.S.C. § 136 et seq.
Amendments: P.L. 80-104, P.L. 88-305, P.L. 92-516, P.L. 94-140, P.L. 95-396, P.L. 96-539, P.L. 100-532, P.L. 101-624, P.L. 102-237, P.L. 104-170, P.L. 108-199, P.L. 110-94
The Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) is a United States federal law that set up the basic U.S. system of pesticide regulation to protect applicators, consumers, and the environment. It is administered and regulated by the United States Environmental Protection Agency (EPA) and the appropriate environmental agencies of the respective states. FIFRA has undergone several important amendments since its inception. A significant revision in 1972 by the Federal Environmental Pesticide Control Act (FEPCA) and several others have expanded EPA’s present authority to oversee the sales and use of pesticides with emphasis on the preservation of human health and protection of the environment by "(1) strengthening the registration process by shifting the burden of proof to the chemical manufacturer, (2) enforcing compliance against banned and unregistered products, and (3) promulgating the regulatory framework missing from the original law".
History

The Federal Insecticide Act (FIA) of 1910 was the first pesticide legislation enacted. This legislation ensured quality pesticides by protecting farmers and consumers from fraudulent and/or adulterated products by manufacturers and distributors. During World War II there was a marked increase in the pesticide market, as wartime research and development produced many chemicals with newly discovered insecticidal properties. Widespread use of pesticides garnered much public and political support because of the resulting postwar food surplus, made possible by higher crop yields from significantly lower pest damage. Synthetic organic insecticide use increased from 100 million pounds in 1945 to over 300 million pounds by 1950. The Federal Insecticide Act of 1910 set standards for chemical quality and provided consumer protection but did not address the growing issue of potential environmental damage and the biological health risks associated with such widespread use of insecticides. Congress passed the Federal Insecticide, Fungicide, and Rodenticide Act in 1947 to address some of the shortcomings of the Federal Insecticide Act.
Amendments and revisions
| Year | Act | Public Law Number |
| --- | --- | --- |
| 1947 | Federal Insecticide, Fungicide, and Rodenticide Act | P.L. 80-104 |
| 1964 | Federal Insecticide, Fungicide, and Rodenticide Act Amendments | P.L. 88-305 |
| 1972 | Federal Environmental Pesticide Control Act | P.L. 92-516 |
| 1975 | Federal Insecticide, Fungicide, and Rodenticide Act Extension | P.L. 94-140 |
| 1978 | Federal Pesticide Act of 1978 | P.L. 95-396 |
| 1980 | Federal Insecticide, Fungicide and Rodenticide Act Amendments | P.L. 96-539 |
| 1988 | Federal Insecticide, Fungicide, and Rodenticide Amendments of 1988 | P.L. 100-532 |
| 1990 | Food, Agriculture, Conservation, and Trade Act of 1990 | P.L. 101-624 |
| 1991 | Food, Agriculture, Conservation and Trade Amendments of 1991 | P.L. 102-237 |
| 1996 | Food Quality Protection Act (FQPA) of 1996 | P.L. 104-170 |
| 2004 | Pesticide Registration Improvement Act of 2003 | P.L. 108-199 |
| 2007 | Pesticide Registration Improvement Renewal Act | P.L. 110-94 |
FIFRA underwent a major revision in 1972, superseding the Federal Insecticide Act of 1910 and the original Federal Insecticide, Fungicide, and Rodenticide Act of 1947. When FIFRA was first passed in 1947, it gave the United States Department of Agriculture responsibility for regulating pesticides. The 1972 revision transferred responsibility for pesticide regulation to the Environmental Protection Agency and shifted the emphasis to protection of the environment and public health. In 1988, the act was amended to change pesticide registration laws and to require reregistration of many pesticides that had been registered before 1984. It was amended again in 1996 by the Food Quality Protection Act, and most recently in 2012 by the Pesticide Registration Improvement Extension Act.
As of May 2007, there were 28 restricted-use pesticides listed, in different formulas and mixtures. Any area where these pesticides are used or applied is considered a restricted area.
Major code sections
| 7 U.S.C. | Section Title | FIFRA |
| --- | --- | --- |
| | Short title and table of contents | Section 1 |
| 136a | Registration of pesticides | Section 3 |
| 136a-1 | Reregistration of registered pesticides | Section 4 |
| 136c | Experimental use permits | Section 5 |
| 136d | Administrative review; suspension | Section 6 |
| 136e | Registration of establishments | Section 7 |
| 136f | Books and records | Section 8 |
| 136g | Inspection of establishments | Section 9 |
| 136h | Protection of trade secrets and other information | Section 10 |
| 136i | Restricted use pesticides; applicators | Section 11 |
| 136j | Unlawful acts | Section 12 |
| 136k | Stop sale, use, removal, and seizure | Section 13 |
| 136n | Administrative procedure; judicial review | Section 16 |
| 136o | Imports and exports | Section 17 |
| 136p | Exemption of Federal and State agencies | Section 18 |
| 136q | Storage, disposal, transportation, and recall | Section 19 |
| 136r | Research and monitoring | Section 20 |
| 136s | Solicitation of comments; notice of public hearings | Section 21 |
| 136t | Delegation and cooperation | Section 22 |
| 136u | State cooperation, aid, and training | Section 23 |
| 136v | Authority of states | Section 24 |
| 136w | Authority of Administrator | Section 25 |
| 136w-1 | State primary enforcement responsibility | Section 26 |
| 136w-2 | Failure by the state to assure enforcement of state pesticide use regulations | Section 27 |
| 136w-3 | Identification of pests; cooperation with Department of Agriculture’s program | Section 28 |
| 136w-4 | Annual report | Section 29 |
| 136w-5 | Minimum requirements for training of maintenance applicators and service technicians | Section 30 |
| 136w-6 | Environmental Protection Agency minor use program | Section 31 |
| 136w-7 | Department of Agriculture minor use program | Section 32 |
| 136w-8 | Pesticide Registration Service Fees | Section 33 |
| 136y | Authorization of Appropriations | Section 35 |
Note: This table shows only the major code sections. For more detail and to determine when a section was added, the reader should consult the official printed version of the U.S. Code.
Regulations

To be considered for use, a pesticide must undergo some 120 tests of its safety and its effectiveness for the intended use. Because of these rigorous tests, only about 1 in 139,000 candidate chemicals makes it through to use in agriculture.
FIFRA established a set of pesticide regulations:
- FIFRA established registration for all pesticides, which is granted only after a period of data collection to determine the effectiveness of the material for its intended use, the appropriate dosage, and its hazards. When a pesticide is registered, a label is created to instruct the final user in the proper usage of the material. If instructions are ignored, users are liable for any negative consequences.
Label directions are designed to maximize the effectiveness of the product while protecting the applicator, consumers, and the environment. Critics of the process point out, on the one hand, that the research behind the label is done entirely by the manufacturer and that little checking is done on its accuracy. On the other hand, some consider the process too strict: it costs millions of dollars and often takes several years to register a pesticide, which limits production to large players. Likewise, many smaller or specialty uses are never registered, because companies do not consider the potential sales sufficient to justify the investment.
- Only a few pesticides are made available to the general public. Most pesticides are considered too hazardous for general use, and are restricted to certified applicators. FIFRA established a system of examination and certification both at the private level and at the commercial level for applicators who wish to purchase and use restricted use pesticides. The distribution of restricted pesticides is also monitored.
- The EPA has different review processes for three categories of pesticides: antimicrobials, biopesticides, and conventional pesticides. The three categories have a similar application process, but have different data requirements and review policies. Depending on the category of pesticide, the review process can take several years. After a pesticide is registered with the EPA, there may be state registration requirements to consider.
- In addition to the rules and regulations given by the EPA, states may impose additional rules and registration requirements for a registered pesticide. They can also request annual usage reports from pesticide users.
In addition to FIFRA, the Pesticide Registration Improvement Act of 2003 amended the authorized fees for certain products, assessed the process of collecting maintenance fees, and established a review process for approving pesticides. The Pesticide Registration Improvement Act of 2007 renewed these changes through 2012. The purpose of the PRIA is to ensure smooth implementation of pesticide rules and regulations for their users.
Import and export
Pesticides intended for import into the U.S. require a complete Notice of Arrival (NOA) filed through U.S. Customs and Border Protection; if the NOA is incomplete, the product will not clear customs. The NOA lists the identity of the product, the amount within the package, the date of arrival, and where it can be inspected. Other rules are listed below:
- It must comply with standards set with the U.S. pesticide law
- The pesticide has to be registered with the EPA, except if it's on the exemption list
- It cannot be adulterated or violative
- There must be proper labeling
- The product must have been produced in an EPA registered establishment that files annually
Pesticides intended for export to other parts of the world do not have a registration requirement under certain conditions. The conditions are as follows:
- The foreign purchaser has to submit a statement to the EPA stating it knows the product is not registered and can't be sold on U.S. soil.
- The pesticide must carry a label stating "Not Registered for Use in the United States"
- The label requirements must be met and the label must contain the English language and the language of the receiving country(ies).
- The pesticide must comply with all FIFRA establishment registration and reporting requirements
- It must comply with FIFRA record keeping requirements
- Note: An EPA-registered establishment is one that produces pesticides, the active ingredients in pesticides, or devices for pesticide use, and that reports initial and annual production.
Registration of pesticide products
Before a company can register its pesticide products with the EPA, it must know what the EPA considers a pesticide under the law. According to section 2(u) of FIFRA, 7 U.S.C. section 136(u), the term “pesticide” is defined as the following:
- any substance or mixture of substances intended for preventing, destroying, repelling, or mitigating any pest,
- any substance or mixture of substances intended for use as a plant regulator, defoliant, or desiccant, and
- any nitrogen stabilizer, except that the term “pesticide” shall not include any article that is a “new animal drug” within the meaning of section 321(w) of title 21, that has been determined by the Secretary of Health and Human Services not to be a new animal drug by a regulation establishing conditions of use for the article, or that is an animal feed within the meaning of section 321(x) of title 21 bearing or containing a new animal drug. The term “pesticide” does not include liquid chemical sterilant products (including any sterilant or subordinate disinfectant claims on such products) for use on a critical or semi-critical device, as defined in section 321 of title 21. For purposes of the preceding sentence, the term “critical device” includes any device which is introduced directly into the human body, either into or in contact with the bloodstream or normally sterile areas of the body and the term “semi-critical device” includes any device which contacts intact mucous membranes but which does not ordinarily penetrate the blood barrier or otherwise enter normally sterile areas of the body.
An applicant will have to prove that the pesticide active ingredient, pesticide product, or proposed new use of a registered pesticide will not cause unreasonable adverse effects on human health and the environment. An unreasonable adverse effect is "(1) any risk that is unreasonable to man or the environment that takes social, economic, and environmental costs as well as benefits into consideration and (2) any dietary risk that could be the result of a pesticide used with any food lacking consistency with the standards listed under Section 408 of the Federal Food, Drug, and Cosmetic Act" (FFDCA). The applicant must provide scientific data from combinations of more than 100 different tests conducted under EPA guidelines to assess these potential short-term and long-term adverse effects.
Under Section 408 of the Federal Food, Drug, and Cosmetic Act (FFDCA), the EPA can also regulate the amount of pesticide residues permissible on or in food/feed items, by establishing a “safe” level meaning there is "a reasonable certainty of no harm" from the exposure to the residue whether directly from the consumption of such food or from other non-occupational sources. For food crops, the EPA is required to establish a “tolerance” level, the maximum “safe” level of pesticide present on or in the particular food/feed commodity. The EPA may also choose to provide an exemption to the requirement of an established tolerance level, allowing any amount of a pesticide residue to remain on or in food or feed as long as the exemption meets FFDCA safety standards. Successfully registered pesticides must conform to approved uses and conditions of use, which the registrant must state on the label.
Reregistration of pesticides
A majority of older registered pesticides were required to be reregistered under guidelines set by amendments in 1972, 1988, and 1996 in order to meet current health and safety standards and labeling requirements, and to regulate and moderate risk. The Food Quality Protection Act (FQPA) amended FIFRA to require that all older pesticides meet a standard of "reasonable certainty" of no harm to infants, children, and sensitive individuals. Through the reregistration program, older pesticides are eligible for reregistration if they have a complete database and do not cause unreasonable health and environmental risks when used as directed in accordance with their labels. FQPA also requires the EPA to review pesticides on a 15-year cycle to ensure all pesticides meet contemporary safety and regulatory standards.
Initial and final fees for reregistration of food or feed use active ingredients are $50,000 and $100,000-$150,000, respectively. Reregistration fees for non-food use pesticides are $50,000-$100,000. Annual maintenance fees are also imposed: $425 per product up to fifty products and a maximum of $20,000 per company. For each product over fifty, the fee is $100, for a maximum fee of $35,000. Fees may be reduced or waived for small business registrants, public health pesticides, or minor use pesticides at the EPA’s discretion, and failure to pay reregistration fees or maintenance fees may result in cancellation of a product registration.
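To make the maintenance-fee schedule concrete, here is a minimal sketch of how it could be computed as stated above. The function name and examples are hypothetical illustrations of the schedule's arithmetic, not an official EPA calculator, and the sketch ignores the discretionary reductions and waivers just described:

```python
def annual_maintenance_fee(num_products):
    """Sketch of the schedule described above: $425 per product for
    the first fifty (capped at $20,000 per company), plus $100 for
    each product over fifty, with an overall cap of $35,000."""
    base = min(min(num_products, 50) * 425, 20000)
    extra = max(num_products - 50, 0) * 100
    return min(base + extra, 35000)

# Examples: annual_maintenance_fee(10)  -> 4250
#           annual_maintenance_fee(50)  -> 20000 (per-company cap applies)
#           annual_maintenance_fee(60)  -> 21000
#           annual_maintenance_fee(200) -> 35000 (overall cap applies)
```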
Regulated non-pesticidal products not requiring registration
Adjuvants are chemicals added to enhance the performance or efficacy of pesticidal agents and/or alter their physical properties. More than 200 EPA-registered pesticides recommend the specific addition of one or more adjuvants to the pesticidal mixture to improve overall efficacy. Adjuvants are recognized as "other ingredients"; the EPA establishes tolerance levels for them, but they are not required to be registered. Examples of adjuvants include:
- acidifying agents,
- buffering agents,
- anti-foam agents,
- defoaming agents,
- dyes and brighteners,
- compatibility agents,
- crop oil concentrates,
- oil surfactants,
- deposition agents,
- drift reduction agents,
- foam markers,
- feeding stimulants,
- herbicide safeners,
- spreaders, extenders,
- adhesive agents,
- suspension agents,
- gelling agents,
- wetting agents,
- dispersing agents,
- tank and equipment cleaners,
- water absorbents, and
- water softeners.
Devices and instruments used to trap or kill pests or plant life, but not including equipment used to apply pesticides when sold separately, are also regulated but not required to be registered. Pesticide “intermediates” used in the synthesis or manufacture of the pesticide products may be regulated but are also not required to be registered with FIFRA. However, these pesticide intermediates may be regulated by the Toxic Substances Control Act of 1976.
Under FIFRA, no individual may sell, use, or distribute a pesticide not registered with the United States Environmental Protection Agency (EPA). A few exceptions allow a pesticide to be exempt from registration requirements. Each pesticide must carry a label describing, in detail, instructions for safe use. Under the act, the EPA must classify each pesticide as "general use", "restricted use", or both. "General use" pesticides are available to anyone in the general public; those labeled "restricted use" require specific credentials and certification through the EPA (certified applicator).
Although FIFRA is generally enforced by the EPA, Sections 23, 24, 26 and 27 extend primary enforcement authority to the states. However, EPA authority always supersedes state authority, and primary state authority can be rescinded if the state fails to assure safe enforcement of pesticide usage. Section 9 authorizes inspection of pesticides in storage for sale or distribution. Under Section 13, EPA may issue a Stop Sale, Use or Removal Order (SSURO) to prevent the sale or distribution of violative pesticides and to seize those pesticides. Section 15 provides indemnity payments for suspended or cancelled registrations. Section 16 allows for a judicial review process for individuals or entities affected by an EPA order or action.
Unlawful acts under FIFRA include:
- Distributing, selling, or delivering any unregistered pesticide.
- Making any advertising claim about a pesticide not included in the registration statement.
- Selling any registered pesticide if its content does not conform to label data.
- Falsification of any test-related information or the submission of any false data to support registration.
- Selling an adulterated or misbranded pesticide.
- Detaching, altering, defacing, or destroying any part of a container or label.
- Refusing to keep records or permit authorized EPA inspections.
- Making a guarantee other than that specified by the label.
- Advertising a restricted-use pesticide without giving the product classification.
- Making a restricted-use pesticide available to a non-certified applicator (except as provided by law).
- Using a pesticide in any manner not consistent with the label.
When determining civil penalties, the EPA takes into consideration the severity of the infraction, the effect of penalties, and the size of the business. Under Section 14(a)(1), commercial applicators, wholesalers, dealers, and retailers "may be assessed a civil penalty…of not more than $5,000 for each offense". Private applicators are given a warning for the first offense, and a fine of up to $1,000 may be assessed for each subsequent violation.
Violative acts are charged as misdemeanors and are subject to fines and/or imprisonment: a private applicator is subject to a fine of up to $1,000 and/or 30 days' imprisonment; a commercial applicator, up to $25,000 and/or up to one year's imprisonment; a manufacturer or producer, up to $50,000 and/or up to one year's imprisonment.
FIFRA requires the EPA to continually monitor and update registered pesticides for any new information concerning their safety. Registrants are required to promptly report any new evidence of adverse side effects and to continually conduct studies to aid in risk assessments. If new information indicates adverse side effects, then EPA may conduct a special review to assess the risks and benefit of continued use of the suspect pesticide. With the completion of a special review, EPA may choose to amend or cancel the registration.
Pesticides and endangered species
The Endangered Species Act protects animal and plant species in danger of extinction due to human activity and promotes their recovery. Under this act, the EPA must also consider risks to animals and plants when registering a new pesticide: the pesticide must not harm listed endangered and threatened species or their habitats. To ensure the program is implemented, some labels direct pesticide users to bulletins with specific information regarding use. The protection program has two main goals: (1) provide the best protection of endangered species from pesticides, and (2) minimize the impact of the program on pesticide users.
To protect endangered species, the EPA program implements the following:
- sound science is used to assess risk to the listed species
- there is an attempt to find means of avoiding risks to listed species
- when risks to listed species cannot be avoided, the EPA consults with Fish and Wildlife Service scientists
- usage limitations are implemented when the Fish and Wildlife Service, based on a biological opinion, identifies a potential adverse effect on a particular species
In order to implement the usage limitations mentioned above, the EPA will:
- add a generic label to the pesticide
- develop bulletins containing habitat locations and pesticide use limitations
- distribute the bulletins containing this information to pesticide users
- provide a toll-free number for users to contact regarding information in bulletins and how to obtain one
Conflicts with other laws and acts
When another act or law conflicts with FIFRA, careful consideration is required to decide which statute governs. The main conflict is with the Clean Water Act (CWA): the central controversy is which law applies when pesticides reach U.S. waters. Pesticides regulated under FIFRA have not been separately regulated under the CWA, and the EPA historically has not required CWA permits for the application of FIFRA-approved pesticides.
Pesticides used for irrigation and aquatic weed control are not necessarily controlled by FIFRA alone, which creates conflict with the CWA over which law should govern their use in water.
"dump": "CC-MAIN-2013-48",
"url": "http://en.wikipedia.org/wiki/Federal_Insecticide,_Fungicide,_and_Rodenticide_Act",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164641332/warc/CC-MAIN-20131204134401-00064-ip-10-33-133-15.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.8778443336486816,
"token_count": 5528,
"score": 2.828125,
"int_score": 3
} |
Definition of Anthimeria
Anthimeria derives from the Greek anti-meros, which means "one part for another." It is a rhetorical device that uses a word in a new grammatical role, often turning a noun into a verb or vice versa. Simply put, it replaces one part of speech with another.
For instance, Shakespeare converts the noun "peace" into a verb in this line: "The thunder would not peace at my bidding" (King Lear). Using nouns as verbs has become such a common practice that many nouns are now routinely used as verbs. In grammar studies, anthimeria has another name, "functional shift" or "conversion." Language is always fluid and in constant transformation, so the use of a verb as a noun, or vice versa, comes as no surprise to linguists.
Use of Anthimeria in Songs
Example #1: These Boots Are Made for Walking (by Nancy Sinatra)
“Yeah, you keep lyin’ when you oughta be truthin’
And you keep losing when you oughta not bet
You keep samin’ when you oughta be a changin’
Now, what’s right is right but you ain’t been right yet.”
This song by Nancy Sinatra presses two words into service as verbs: "truthin'" (from the noun "truth") and "samin'" (from the adjective "same").
Types of Anthimeria
Depending upon its usage, anthimeria has two types:
Temporary anthimeria: This type may be trendy or popular, but it does not become a permanent part of the language. For instance, a recent temporary anthimeria is "hashtagging"; it has emerged only recently and may not last long.
Permanent anthimeria: This type has become a lasting part of the language after its emergence. For instance, "texting" is now a permanent part of the language, as is "typing."
Examples of Anthimeria in Literature
Example #1: Under the Greenwood Tree (by Thomas Hardy)
“The parishioners about here,” continued Mrs. Day, not looking at any living being, but snatching up the brown delf tea-things, “are the laziest, gossipest, poachest, jailest set of any ever I came among. And they’ll talk about my teapot and tea-things next, I suppose!”
Hardy was known for his creativity and inventiveness in coining completely new and unusual words, such as "gossipest," "poachest," and "jailest" in this excerpt from Under the Greenwood Tree.
Example #2: Letter to F. Scott Fitzgerald (by Thomas Wolfe)
“Flaubert me no Flauberts. Bovary me no Bovarys. Zola me no Zolas. And exuberance me no exuberances. Leave this stuff for those who huckster in it and give me, I pray you, the benefits of your fine intelligence and your high creative faculties, all of which I so genuinely and profoundly admire.”
In these lines, the writers' names are converted into verbs and plural nouns, forms we have never seen before. This is another good example of anthimeria.
Example #3: In the Marvelous Dimension (by Kate Daniels)
“Until then, I’d never liked
petunias, their heavy stems,
the peculiar spittooning sound
of their name. Now I loved
a petunia for all it was worth
—a purplish blue bloom
waving in a red clay pot outside
an office window.”
In this poem, Kate has changed the noun “spittoon” into a verb “spittooning,” and changed the color purple into an adjective.
Example #4: More Die of Heartbreak (by Saul Bellow)
“I’ve often got the kid in my mind’s eye. She’s a dolichocephalic Trachtenberg, with her daddy’s narrow face and Jesusy look.”
In this example, “Jesus” is transformed into a new form of adjective “Jesusy.” It gives a complete new expression to a noun.
Example #5: Emma (by Jane Austen)
“Let me not suppose that she dares go about, Emma Woodhouse-ing me!”
Austen invents the verb "woodhouse-ing" from the proper noun "Woodhouse," giving an old name a new grammatical shape.
Function of Anthimeria
Anthimeria is very common in novels, short stories, and particularly in poetry, where such replacement can momentarily puzzle the reader; still, the intended meaning is usually easy to recover from the ways and methods of expression commonly used in literature. It also appears in advertisements: because culture is constantly changing, language must also grow, improve, and develop. Anthimeria, in fact, gives writers a method to describe ideas in a unique way that makes readers think. Sometimes writers use a new word to create images and imagery. Beyond this, it is one of the methods through which we transform our language over time.
"dump": "CC-MAIN-2017-13",
"url": "https://literarydevices.net/anthimeria/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186895.51/warc/CC-MAIN-20170322212946-00480-ip-10-233-31-227.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9341176748275757,
"token_count": 1147,
"score": 3.28125,
"int_score": 3
} |
By Brad Reagan
The practice of no-till planting has a firm foothold among American farmers. But many of them aren't using it full time—and there are big obstacles that may limit how far it spreads.
Introduced in the early 1960s, no-till didn't hit the mainstream until about 1980, after gas prices spiked. By 2009—the latest figures available—about 35.5% of the country's cropland had no-tillage operations, according to the Department of Agriculture.
Now the agency estimates the practice is growing by 1.5% a year, says John Horowitz, an economist with the USDA's Resource and Rural Economics Division. But those numbers don't tell the whole story.
For one thing, most American farmers—unlike their peers in countries such as Brazil—use no-tilling methods only part of the time. Less than 10% of American farmers are considered "continuous no-till" practitioners, says Tony Vyn, a professor of agronomy at Purdue University.
The rest employ no-till on a selective basis, or use hybrid techniques such as "strip tillage," in which they loosen only those zones of soil where seeds will be planted and leave the areas between them untouched. "That is the route more and more operations are taking," Dr. Vyn says.
Why? Conventional tilling offers one big advantage to farmers: the potential for earlier—and thus longer—planting seasons in certain circumstances.
When there's a lot of rain in late spring, no-till farmers have to wait until the fields dry naturally before they can start planting. But plowing dries out fields, so farmers who use traditional methods can start planting a lot sooner.
Another obstacle to no-till is a bit of a paradox: While the practice is generally perceived as environmentally friendly, it also requires more herbicide use. After all, disrupting the weed cycle is one of the primary reasons farmers plow their fields.
Not only does heavy use of herbicides make some farmers and consumers uneasy, some farmers have reported that weeds are getting increasingly resistant to herbicides. That forces them to find new combinations of weed killers or, in some cases, return to plowing.
Looking forward, researchers are watching closely to see how farmers react to the effects of this year's calamitous drought, which Mr. Horowitz calls "a curve ball" for the adoption of no-till.
No-till farmers typically benefit when droughts hit in July and August, when their fields are able to supply moisture better than fields that have been plowed. This year, though, the drought hit much earlier, landing in full force in May through some major corn-producing states like Indiana. That hurt many no-till farmers, because the roots of their crops hadn't yet been established when the heat wave hit.
But Mr. Horowitz says the long, dry summer will also likely draw attention to no-till's abilities to lessen soil erosion and retain moisture. Meanwhile, if gas prices continue to creep higher, he says, that also would attract more farmers.
"We expect, if anything, [the drought] will accelerate the adoption of no-till," he says. | <urn:uuid:cb6cc525-c884-46be-a36b-28614d9ec54e> | {
"dump": "CC-MAIN-2020-40",
"url": "https://www.no-tillfarmer.com/articles/1941-the-wall-street-journal-weighs-in-on-no-till-adoption-drought",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402131412.93/warc/CC-MAIN-20201001112433-20201001142433-00740.warc.gz",
"language": "en",
"language_score": 0.9591005444526672,
"token_count": 669,
"score": 2.796875,
"int_score": 3
} |
According to the Anxiety and Depression Association of America, Compulsive Hoarding is defined as “the persistent difficulty discarding or parting with possessions, regardless of their actual value.” For hoarders, the quantity of their collected items is what helps to characterize and distinguish them from other people.
Symptoms & Behavior
As stated by the Anxiety and Depression Association of America, some of the symptoms and behavior that someone who hoards may exhibit includes:
- Inability to throw away possessions
- Severe anxiety when attempting to discard items
- Great difficulty categorizing or organizing possessions
- Indecision about what to keep or where to put things
- Distress, such as feeling overwhelmed or embarrassed by possessions
- Suspicion of other people touching items
- Obsessive thoughts and actions: fear of running out of an item or of needing it in the future; checking the trash for accidentally discarded objects
- Functional impairments, including loss of living space, social isolation, family or marital discord, financial difficulties, health hazards
Hoarding isn’t something that can be fixed by simply “cleaning up the mess.” The clutter created by a “hoarder” is often the physical manifestation of a deeper, more serious issue, which often stems from psychological or emotional problems including:
- Anxiety disorders
- Learning disabilities
- And more
Often, well meaning loved ones will try to fix the problem by cleaning up the mess created by a hoarder, only to have it backfire and cause more damage to their loved one and their relationships. If you believe that you or a loved one in the Upstate NY and Capital Region area are suffering from a Hoarding Disorder, contact Organized by Sharon today.
With years of experience working with various hoarding clients throughout the Capital District, I am trained to work with hoarders and their loved ones. I utilize proven techniques that are tailored specifically to the client and their particular situation and offer non-judgmental, empathetic, and patient hoarding assistance and organization services to help make the process as stress free and go as smoothly as possible. As a member of the Institute for Challenging Disorganization, I also hold a Certificate of Study in Chronic Disorganization and have studied the effects of disorganization and hoarding.
Give me a call at 518-791-5560 to learn more about how I can assist you or a loved one with a hoarding disorder today. | <urn:uuid:6ab1d0b9-92a4-49fe-af65-35ba747c960b> | {
"dump": "CC-MAIN-2017-47",
"url": "http://www.organizedbysharon.com/hoarding-assistance-services/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807650.44/warc/CC-MAIN-20171124104142-20171124124142-00396.warc.gz",
"language": "en",
"language_score": 0.9570183753967285,
"token_count": 510,
"score": 2.625,
"int_score": 3
} |
Subjects: Health IT
In the 12 years since our government acknowledged we had a problem with racial disparities in health care, we’ve made significant progress in reducing them. Steep declines in the prevalence of cigarette smoking among African Americans have narrowed the gap in lung cancer death rates between them and whites, for example. Inner city kids have better food choices at school. The 3-decade rise in obesity rates, steepest among minorities, has leveled off.
Still, racial disparities persist across the widest possible range of health services and disease states in our country. The racial gap in colorectal cancer mortality has widened since the 1980s. Overall cancer death rates are 24% higher among African Americans. Sixteen percent of African American adults and 17% of Hispanic adults report their health to be fair or poor, whereas only 10% of white American adults say that. The number of African Americans and Hispanics who report having access to a primary care physician is 30-50% lower than the number of white Americans who report having one.
How can EMRs Help?
Many studies that rely on EMRs for data collection or care coordination have shown them to have great potential as tools that can reduce racial disparities in health care. For example, a 2009 study showed that post-market surveillance using patient data stored in an EMR could have detected cardiovascular complications from the diabetes drug Avandia much faster than traditional methods. That's a plus because African Americans and Hispanics are disproportionately affected by diabetes. Another study showed that patient data from EMRs could identify patients at high risk for domestic abuse, which is more common in some minority populations. A third study showed that EMRs improved care coordination for patients with kidney failure, a condition that disproportionately affects African Americans.
Some of the Federal government’s Meaningful Use criteria may also reduce these disparities, once they fully take effect. The requirement that providers use clinical decision support tools embedded within EMRs holds promise in this regard. CDS tools whose development was underwritten by the Agency for Healthcare Research and Quality incorporate care management strategies designed specifically for minority populations, for example. In addition, Meaningful Use also requires providers to record patient demographic information in the EMR, and this development will likely increase the research value of the patient data contained in these systems.
But There is a Problem
Unfortunately, the National Ambulatory Medical Care Survey suggests that EMR adoption rates are lower among providers who serve minority populations. A study by Jha and colleagues confirmed these findings and also demonstrated that hospitals which served Hispanic and African American patients provided lower quality care. However, among the disproportionate-share hospitals that did use EMRs in Jha’s study, the quality gap disappeared. Jha’s group concluded that EMRs helped mitigate quality issues in hospitals where poor people and minorities received care.
Studies like these prompted David Blumenthal, the National Coordinator for HIT at the time, to implore EMR vendors to help assure that the financial incentives associated with HITECH would not create a "digital divide," in which disproportionate-share hospitals fell further behind their brethren.
“It is absolutely necessary that the leading EHR vendors work together, continuing to provide EHR adoption opportunities for physicians and other healthcare providers working within underserved communities of color,” Blumenthal said.
So Where We Stand Now?
So far as I know, only 2 vendors have stepped up to the plate in this regard (please let me know of others!). The first is Practice Fusion, which has a longstanding policy of partnering with free clinics, non-profits and community health organizations to help spread the benefits inherent to EMRs to patients, regardless of their ability to pay. Practice Fusion provides an ad-free version of its EMR along with specialized training, customizations, hardware guidance and implementation assistance to qualified non-profit organizations (Disclosure: I served for 2 years as Sr. VP Clinical Affairs and own stock in this company).
More recently, Quest Diagnostics rolled out a new initiative in conjunction with HHS' Office of Minority Health to help improve EMR adoption among small healthcare practices serving medically underserved and minority populations in the Houston area.
Quest is waiving 85% of the cost of EMR licenses for its cloud-based Care360 EMR for 75 qualifying practices (Stark Laws require some payment, according to Quest). The offer includes subscription fees and assistance with education and training.
These companies are to be congratulated. More vendors should step up as well. EMRs hold great promise as tools to reduce racial disparities in health care in our country. | <urn:uuid:d7361429-b3c5-4bc1-8fdb-b9bb2cd3d51a> | {
"dump": "CC-MAIN-2015-27",
"url": "http://www.pizaazz.com/2011/07/11/can-emrs-reduce-racial-disparities-in-health-care/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098924.1/warc/CC-MAIN-20150627031818-00017-ip-10-179-60-89.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9496263861656189,
"token_count": 937,
"score": 3.203125,
"int_score": 3
} |
In this chapter, you’ll design a Hangman game. This game is more complicated than our previous game, but also more fun. Because the game is advanced, you should first carefully plan it out by creating a flow chart (explained later). In the next chapter, you’ll actually write out the code for Hangman.
Hangman is a game for two people usually played with paper and pencil. One player thinks of a word, and then draws a blank on the page for each letter in the word. Then the second player tries to guess letters that might be in the word.
If they guess correctly, the first player writes the letter in the proper blank. If they guess incorrectly, the first player draws a single body part of the hanging man. If the second player can guess all the letters in the word before the hangman is completely drawn, they win. But if they can’t figure it out in time, they lose.
Here is an example of what the player might see when they run the Hangman program you'll write in the next chapter. The text that the player enters is shown in bold.
The graphics for Hangman are keyboard characters printed on the screen. This type of graphics is called ASCII art (pronounced "ask-ee"), which was a sort of precursor to emoji. Here is a cat drawn in ASCII art:
This game is a bit more complicated than the ones you’ve seen so far, so take a moment to think about how it’s put together. First you’ll create a flow chart (like the one at the end of the Dragon Realm chapter) to help visualize what this program will do. This chapter will go over what flow charts are and why they are useful. The next chapter will go over the source code to the Hangman game.
A flow chart is a diagram that shows a series of steps as boxes connected with arrows. Each box represents a step, and the arrows show which steps lead to which other steps. Put your finger on the "Start" box of the flow chart and trace through the program by following the arrows to other boxes until you get to the "End" box.
Figure 8-1 is a complete flow chart for Hangman. You can only move from one box to another in the direction of the arrow. You can never go backwards unless there’s a second arrow going back, like in the “Player already guessed this letter” box.
Figure 8-1: The complete flow chart for what happens in the Hangman game.
Of course, you don’t have to make a flow chart. You could just start writing code. But often once you start programming you’ll think of things that must be added or changed. You may end up having to delete a lot of your code, which would be a waste of effort. To avoid this, it’s always best to plan how the program will work before you start writing it.
Your flow charts don’t always have to look like this one. As long as you understand the flow chart you made, it will be helpful when you start coding. A flow chart that begins with just a “Start” and an “End” box, as shown in Figure 8-2:
Figure 8-2: Begin your flow chart with a Start and End box.
Now think about what happens when you play Hangman. First, the computer thinks of a secret word. Then the player will guess letters. Add boxes for these events, as shown in Figure 8-3. The new boxes in each flow chart have a dashed outline around them.
The arrows show the order that the program should move. That is, first the program should come up with a secret word, and after that it should ask the player to guess a letter.
Figure 8-3: Draw out the first two steps of Hangman as boxes with descriptions.
But the game doesn’t end after the player guesses one letter. It needs to check if that letter is in the secret word or not.
Branching from a Flowchart Box
There are two possibilities: the letter is either in the word or not. You'll add two new boxes to the flowchart, one for each case. This creates a branch in the flow chart, as shown in Figure 8-4:
Figure 8-4: The branch has two arrows going to separate boxes.
If the letter is in the secret word, check if the player has guessed all the letters and won the game. If the letter isn’t in the secret word, another body part is added to the hanging man. Add boxes for those cases too.
You don’t need an arrow from the “Letter is in secret word” box to the “Player has run out of body parts and loses” box, because it’s impossible to lose as long as the player guesses correctly. It’s also impossible to win as long as the player is guessing incorrectly, so you don’t need to draw that arrow either. The flow chart now looks like Figure 8-5.
Figure 8-5: After the branch, the steps continue on their separate paths.
Ending or Restarting the Game
Once the player has won or lost, ask them if they want to play again with a new secret word. If the player doesn’t want to play again, the program will end. If the program doesn’t end, it thinks up a new secret word. This is shown in Figure 8-6.
Figure 8-6: The flow chart branches when asking the player to play again.
The player doesn’t guess a letter just once. They have to keep guessing letters until they win or lose. You’ll draw two new arrows, as shown in Figure 8-7.
Figure 8-7: The new arrows (outlined) show the player can guess again.
What if the player guesses the same letter again? Rather than have them win or lose in this case, allow them to guess a different letter instead. This new box is shown in Figure 8-8.
Figure 8-8: Adding a step in case the player guesses a letter they already guessed.
Offering Feedback to the Player
The player needs to know how they’re doing in the game. The program should show them the hangman board and the secret word (with blanks for the letters they haven't guessed yet). These visuals will let them see how close they are to winning or losing the game.
This information is updated every time the player guesses a letter. Add a “Show the board and blanks to the player.” box to the flow chart between the “Come up with a secret word” and the “Ask player to guess a letter” boxes. These boxes are shown in Figure 8-9.
Figure 8-9: Adding “Show the board and blanks to the player.” to give the player feedback.
That looks good! This flow chart completely maps out everything that can happen in Hangman and in what order. When you design your own games, a flow chart can help you remember everything you need to code.
It may seem like a lot of work to sketch out a flow chart about the program first. After all, people want to play games, not look at flowcharts! But it is much easier to make changes and notice problems by thinking about how the program works before writing the code for it.
If you jump in to write the code first, you may discover problems that require you to change the code you've already written. Every time you change your code, you risk creating new bugs by changing too little or too much. It is much better to know what you want to build before you build it.
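To preview how the boxes in the finished flow chart might map to code, here is a minimal Python sketch of the game loop. This is not the program you'll build in the next chapter: the word list, the number of body parts, and the messages are placeholders, and the real version will also draw the ASCII art board. Each comment names the flow chart box that line implements.

```python
import random

WORDS = ['python', 'hangman', 'flowchart']  # placeholder word list
MAX_MISSES = 6  # one miss per body part of the hanging man

while True:
    secret = random.choice(WORDS)  # "Come up with a secret word"
    guessed = set()
    misses = 0

    while True:
        # "Show the board and blanks to the player"
        print('\nMisses: %d of %d' % (misses, MAX_MISSES))
        print(' '.join(c if c in guessed else '_' for c in secret))

        # "Ask player to guess a letter"
        letter = input('Guess a letter: ').lower()
        if letter in guessed:
            # "Player already guessed this letter" -- ask again
            print('You already guessed that letter. Choose again.')
            continue
        guessed.add(letter)

        if letter in secret:
            # "Letter is in secret word"
            if all(c in guessed for c in secret):
                print('You win! The word was ' + secret)
                break
        else:
            # "Letter is not in secret word" -- add a body part
            misses += 1
            if misses == MAX_MISSES:
                print('You lose. The word was ' + secret)
                break

    # "Ask player if they want to play again"
    if input('Play again? (yes or no) ').lower() != 'yes':
        break
```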
"dump": "CC-MAIN-2017-39",
"url": "https://inventwithpython.com/chapter8.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687711.44/warc/CC-MAIN-20170921082205-20170921102205-00062.warc.gz",
"language": "en",
"language_score": 0.9479176998138428,
"token_count": 1652,
"score": 4.21875,
"int_score": 4
} |
Why was the town established?
1st group: The first group went back to England.
2nd group: Completely disappeared.
What would I do to fix the problems?
The town was established to expand England's territory and to search for gold and other riches. This town was established in 1585 in present-day North Carolina.
Major problems: Businessmen were sent, not farmers. People were afraid. Sickness: people died. The land was bad for farming. The first group went back to England. The second group disappeared.
I would have advised the queen to send farmers and to send more supply ships.
"dump": "CC-MAIN-2020-16",
"url": "https://www.storyboardthat.com/storyboards/jocelyn57323/unknown-story",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506959.34/warc/CC-MAIN-20200402111815-20200402141815-00197.warc.gz",
"language": "en",
"language_score": 0.9549539089202881,
"token_count": 176,
"score": 3.40625,
"int_score": 3
} |
White sea turtles are rare in nature, but teams in Florida have discovered two in as many weeks, the Daytona Beach News-Journal reports.
Nest monitors found one loggerhead hatchling at a nature reserve, and it was strong enough to swim away. But the white sea turtle shown here, which was found near New Smyrna Beach, needed a little extra care.
"All of the other hatchlings had escaped and this one was down there on the bottom," Amber Bridges, a field biologist with Ecological Associates, told the newspaper. "I tried to release it but it was too weak."
She brought the white turtle to the Marine Science Center in Volusia County, where it recuperated and was later released into the wild. The center has helped thousands of turtles and sea birds recover and return to the wild, according to its website.
Although this particular turtle is white due to a lack of pigment, it is not an albino. However, there have been other reported cases of albino sea turtles.
Visit the Daytona Beach News-Journal to read about the difference between albino animals and leucistic animals, such as this turtle.
Sea turtle conservation is an important issue in Florida. Earlier this month, an egg-carrying female hawksbill sea turtle was transported to a hospital in the Florida Keys after being flown to Miami from the U.S. Virgin Islands.
According to the NOAA, most species of sea turtles are endangered, including the loggerhead. A few species, such as the hawksbill turtle, are listed as "critically endangered," according to the International Union for the Conservation of Nature.
Turtle populations in the Gulf of Mexico were greatly impacted by the 2010 BP oil spill, with dead animals reported at 4 to 6 times the normal rate during the months following the ecological disaster.
"dump": "CC-MAIN-2014-42",
"url": "http://www.huffingtonpost.com/2012/09/19/white-sea-turtle-hatched-rescued-florida-photo-endangered-rare_n_1894493.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507445159.36/warc/CC-MAIN-20141017005725-00028-ip-10-16-133-185.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9777894020080566,
"token_count": 379,
"score": 2.78125,
"int_score": 3
} |
Daylit Austrian Kindergarten is a Bright, Open Space That Blends With the Outdoors
Posted by Andrew Michler on September 23, 2011
The design is focused on creating an active learning environment where groups can collect or break up within the spaces without being cut off. The interior rooms provide a quiet learning place surrounded by public rooms. The building is placed with the long axes facing west and east, creating an elongated interior/exterior transition to the south.
The best part of the design is the southern face of the school, which has multiple entrances along the stepped back rooms to facilitate connectivity to the outdoors. Tall, operable windows bring daylight and fresh air into the spaces. The exterior is covered in a wooden louvered pergola which protects the glass and the play area from the sun while binding the stepped room sections together. Natural light is another central binding element in the school, which features a copious number of drop-down skylights integrated into the ceiling, adding an unusually lively ambience to the space.
Underground earthtube air intakes provide fresh pre-tempered air, which is heated or cooled before entering the building. The entire school has an underfloor heating system to keep the kids’ toes toasty on winter days. Hot water is provided by an efficient water-to-water heat pump. Rain is captured and reused as well, helping the school to truly integrate into its site.
Via e-architect
"dump": "CC-MAIN-2014-23",
"url": "http://inhabitat.com/daylit-austrian-kindergarten-mixes-with-the-out-doors/print/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997888283.14/warc/CC-MAIN-20140722025808-00045-ip-10-33-131-23.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.8788953423500061,
"token_count": 731,
"score": 2.59375,
"int_score": 3
} |
The Natural Storyteller is full of dynamic story seeds. When you open the book and
read a story seed, you plant it in yourself, unleashing courage, creativity, and love of
nature through true stories of environmental heroes and botanical tales of living trees.
These are stories gleaned from the treasures of world traditions, but re-visioned for
today’s child and told with great energy and panache, including adventures among
birds, animals, and people; fairytales from the forest; and true tales of sea, earth, and sky.
Readers will want to retell these stories immediately, whether at bedtime or around
the campfire under the stars. These stories inspire wonder and service for Mother Earth.
This is a handbook for the nature storyteller, with story maps, brain-teasing riddles,
story skeletons, and adventures to make a tale your own. Here is a vibrant invitation
to embrace a world of stories about nature, animals, and plants and our relationship with them.
Georgiana Keable shows—through a range of techniques and the power of stories—how
to interpret, retell, and pass these stories on for the future. This diverse collection of
stories will nurture active literacy skills and help children form essential bonds with nature.
“[The Natural Storyteller] is life affirming. All of its stories are about taking delight in creation.
It is a journey into storytelling as well as story.” —Hugh Lupton, award-winning storyteller | <urn:uuid:09daad48-8074-4e66-ba43-f4ff10bb70ef> | {
"dump": "CC-MAIN-2018-39",
"url": "http://poetry-bookstore.com/catalog/product_info.php?products_id=1816&osCsid=okbt4g5ag7t80mbe4ik00k3s62",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158609.70/warc/CC-MAIN-20180922162437-20180922182837-00387.warc.gz",
"language": "en",
"language_score": 0.9059942960739136,
"token_count": 323,
"score": 2.75,
"int_score": 3
} |
Question for Ray Stark regarding tire pressures:
If a tire is pressurized to 100 psi with dry air at sea level, where the atmospheric pressure is 14.7 psia, then placed in a vacuum chamber and the chamber evacuated to 0.0 psia, what would the change in tire pressure be if the temperature in the vacuum chamber was held constant at standard day conditions?
If the test was repeated with the tire pressurized with dry nitrogen rather than dry air, what would be the change in tire pressure?
If both tests were repeated with a temperature reduction to near -65 degrees F at the 0.0 psia chamber pressure, what would the resultant tire pressures be?
Would it be possible for the tire pressures in either case to be greater than 100 psig plus 14.7 psig? If so, how? What tire pressure would explode the typical aircraft tire if it had been correctly pressurized at sea level?
Thanks in advance
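For reference, here is a minimal sketch of the ideal-gas arithmetic these questions turn on. It assumes a rigid (constant-volume) tire, ideal-gas behavior for both dry air and dry nitrogen, and a standard-day temperature of 59 degrees F (288.15 K); real tires flex and real gases deviate slightly, so the numbers are approximations, not an authoritative answer:

```python
ATM = 14.7                         # sea-level atmospheric pressure, psia
p_fill_abs = 100 + ATM             # absolute pressure at fill: 114.7 psia

# 1) Chamber evacuated to 0.0 psia with temperature held constant:
#    the absolute pressure inside the tire is unchanged, so only the
#    gauge reading rises, from 100 psig to about 114.7 psig.
gauge_in_vacuum = p_fill_abs - 0.0

# 2) Dry nitrogen vs. dry air: both behave nearly ideally at these
#    conditions, so the isothermal result is essentially identical.

# 3) Cooling to -65 F at constant volume (Gay-Lussac's law):
T1 = 288.15                        # 59 F, in kelvin
T2 = (-65 - 32) * 5 / 9 + 273.15   # -65 F is about 219.26 K
p_cold_abs = p_fill_abs * T2 / T1  # about 87.3 psia, ~87.3 psig in vacuum

print(round(gauge_in_vacuum, 1), round(p_cold_abs, 1))
```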
"dump": "CC-MAIN-2021-49",
"url": "https://community.southwest.com/t5/user/viewprofilepage/user-id/55844/user-messages-feed/latest-contributions",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363125.46/warc/CC-MAIN-20211204215252-20211205005252-00543.warc.gz",
"language": "en",
"language_score": 0.952880859375,
"token_count": 206,
"score": 2.9375,
"int_score": 3
} |
The French capital cut its greenhouse gas emissions by 9.2% between 2004 and 2014. But this is not enough to reach its 25% target by 2020. EURACTIV’s partner Journal de l’Environnement reports.
Paris must make more of an effort to cut emissions. According to a study published on 13 July, based on the calculation method used by the French Environment and Energy Management Agency (ADEME), the French capital emitted 25.6 million tonnes of CO2 in 2014. This is just 9.2% less than in 2004.
At this rate, the COP 21 host will miss its greenhouse gas (GHG) emissions reduction target of 25% by 2020, fixed in December 2012 by the Climate Energy plan.
One surprising fact to emerge from the study is that one-third of all Parisian GHG emissions come from air travel for business and air freight, while tourist flights are excluded from the calculation.
Goods transport and buildings out in front
The emissions reduction of 9.2% compared to 2004 has largely been achieved by cuts to goods transport emissions (-18%) and building emissions (-15%), through extensive efficiency renovation of the city’s social housing stock.
Emissions from transport in central Paris fell by 39% over the ten-year period, thanks largely to the development of the tram system and the Vélib’ bike rental service.
Energy consumption down slightly
Between 2004 and 2014, the French capital reduced its energy consumption by just 7%, to 31,500 gigawatt hours, again, well off course for its 25% objective for 2020.
Industry accounts for just 5% of emissions from energy consumption in the capital, with the remainder shared between the services sector (51%) and residential consumption (44%). Electricity is the leading energy source, ahead of natural gas and geothermal, which grew steadily over the ten-year period.
In 2014, 15.6% of the energy consumed by the capital came from renewable or recovered sources, up just five percentage points from 2004.
Bad example from the administration
The Parisian administration is far from setting a shining example on GHG emissions. The city's authorities managed to cut their carbon footprint by only 2% over the decade studied, a poor performance they put down to an increase in the number of canteen meals served each year (up by seven million). But without the Climate Energy plan, the city administration's emissions would have grown by 17%, according to their own projections.
More food, less waste
Emissions from the food chain increased by 10%, while those related to waste fell by 13% between 2004 and 2014. This is due to a slight reduction in household waste. At just 15% in 2014, the recycling rate has stayed low.
“While I am delighted at the progress we have already made in reducing the city’s ecological footprint, I am conscious that we will have to accelerate our work rate in order to meet our COP 21 objectives. This will be addressed in the post-2020 Climate Energy plan, which we will begin this autumn,” said Célia Blauel, the deputy mayor in charge of the regional Climate Energy plan. She sees this as a golden opportunity to push for faster change. | <urn:uuid:83b41cdc-8c1b-4403-85a5-b7953a5cd91b> | {
"dump": "CC-MAIN-2017-34",
"url": "http://www.euractiv.com/section/climate-environment/news/paris-set-to-miss-cop-21-commitments/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107490.42/warc/CC-MAIN-20170821041654-20170821061654-00051.warc.gz",
"language": "en",
"language_score": 0.9554751515388489,
"token_count": 676,
"score": 3.140625,
"int_score": 3
} |
“Our user studies have found that user education can help prevent people from falling for phishing attacks. However, it is hard to get users to read security tutorials, and many of the available online training materials make users aware of the phishing threat but do not provide them with enough information to protect themselves. Our studies demonstrate that Anti-Phishing Phil is an effective approach to user education,” stated the Carnegie Mellon University team.
Anti-Phishing Phil was developed by members of the CMU Usable Privacy and Security Laboratory with funding from the US National Science Foundation (Cyber Trust initiative) and ARO/CyLab.
The game design team was led by Steve Sheng and included Alessandro Acquisti, Lorrie Cranor, Jason Hong, Ponnurangam Kumaraguru, Bryant Mangien, and Elizabeth Nunge.
Anti-Phishing Phil can be played here. | <urn:uuid:76b89b85-afc8-409b-8f06-cbb7d5692572> | {
"dump": "CC-MAIN-2018-39",
"url": "https://news.portalit.net/technology/security-news/anti-phishing-game-to-help-raise-awarness-411.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161214.92/warc/CC-MAIN-20180925063826-20180925084226-00536.warc.gz",
"language": "en",
"language_score": 0.956987738609314,
"token_count": 185,
"score": 2.71875,
"int_score": 3
} |
Land Use Regulation
Land ownership and its use are conditionally protected by the United States Constitution. Over the course of almost a millennium of English and American jurisprudence, the law has recognized that certain land uses cause unreasonable harm to neighboring landowners or the public at large.
The law has long recognized trespass and nuisance, both public and private, as well as other legal concepts, as both limiting and protecting the rights of landowners. More recently, both federal and state statutes have protected the value that natural resources provide, measured as ecosystem services that include clean water and air quality, wildlife and fish, and tourism.
The Land Use Regulation resource page covers the intersection where the legal rights of property ownership and use meet the rights of the public and other landowners, exercised through government regulation or private action, to limit certain rights and uses.
Land Use Resources
Commentary on 2018 Changes to NC Voluntary Agricultural District Law The NC General Assembly made an important change to NC’s voluntary agricultural district law, which has resulted in ordinances covering 90 NC counties.
An Overview of Titling of Real Property This short piece reviews the various forms of real property ownership.
A Comment on Tree Fall Liability This short piece, written in response to questions that arise after hurricanes and other epic storms, provides an overview of the law concerning liability for trees that fall and cause damage to another’s property.
Timberlands Transfer and Liability Protection Presentation on property transfer and premises liability.
Voluntary Agricultural District (VAD) Memorandum of Understanding (MOU) DRAFT Template The template may be used for extending the application of a Voluntary Agricultural District (VAD) county ordinance to an incorporated municipal area of the county.
Public Land Resources
Conflict on Public Lands: New Off-Road Vehicle Restrictions on the Outer Banks
Tax Incentives and Land Use
The Agriculture and Resource Economics department has investigated the interface between tax incentives and land use, including the donation of land to land trusts under so-called conservation easements. Tax incentives vary by state, are overlaid on top of federal taxes, and influence charitable donations of land. This project is led by Wally Thurman.
Land use for recreational activities is another topic being explored. The Cape Hatteras National Seashore in the Outer Banks restricts access for recreational vehicles to protect some habitats. The restrictions create costs for shoreline fishing and have drawn strong opposition from local interests because of lost economic activity. The program looks at the costs and benefits of these restrictions and sheds light on the debate over off-road vehicle rules in national parks. Roger von Haefen leads this project.
In the news
Mar 23, 2022
New Fact Sheet Published on Wetlands Law
This publication provides an overview of the historical wetland trends in North Carolina, reviews the evolution and current status of wetland regulations and summarizes the potential impacts of climate change on wetlands in NC.
Feb 9, 2022
New Grant Will Explore the Economics of Hog, Poultry Manure Recycling Technologies
Working with other NC State researchers on a new environmental grant, Eric Edwards will assess the economic outcomes of manure recycling technologies.
Feb 28, 2022
North Carolina Farms Grapple with Labor Shortages
Even with a growing dependence on migrant labor, there still is not enough workers to fill farm jobs across North Carolina.
Dec 14, 2021
NC State Economist: An Update on North Carolina Solar Development and Decommission Policy
Take a look at the current solar energy capabilities in North Carolina and the developing plan for decommission and disposal in the coming years. | <urn:uuid:3e644b7e-8c14-43cf-bbbd-77e6fbfeb959> | {
"dump": "CC-MAIN-2023-23",
"url": "https://cals.ncsu.edu/are-extension/policy-and-regulation/land-use-regulation/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644817.32/warc/CC-MAIN-20230529074001-20230529104001-00273.warc.gz",
"language": "en",
"language_score": 0.9126710891723633,
"token_count": 736,
"score": 2.84375,
"int_score": 3
} |
Former Guatemalan dictator Efrain Rios Montt has been found guilty of genocide and crimes against humanity for his role in the country's bloody civil war.
It is the first time a former head of state had been found guilty of genocide in their own country.
- Rios Montt seized power in a 1982 coup and ruled until he was overthrown just over a year later.
- His period in power was the bloodiest of the country's 36-year civil war.
- Accused of implementing a scorched-earth policy in which troops massacred thousands of indigenous villagers.
- Returned to the political limelight when he ran for president in 2003, and again in 2006.
- Was back in public office in 2007 as a member of Congress, which secured him immunity from prosecution over war crimes allegations.
- Immunity expired with the end of his term in office in January 2012.
- Within weeks, he was summoned to court before being tried over the killings of at least 1,771 members of the Maya Ixil indigenous group, in what Amnesty International hailed as the trial of the decade.
"dump": "CC-MAIN-2017-04",
"url": "http://www.itv.com/news/update/2013-05-11/who-is-efrain-rios-montt/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282935.68/warc/CC-MAIN-20170116095122-00022-ip-10-171-10-70.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9807015657424927,
"token_count": 298,
"score": 2.59375,
"int_score": 3
} |
Advocates push to list Louisiana's Poverty Point as World Heritage Site

Condensed by Native Village from the original article.

"It was the New York of its time."

So says one archaeologist of the once-teeming community at Poverty Point. She believes the United Nations should designate Poverty Point as a UNESCO (United Nations Educational, Scientific and Cultural Organization) World Heritage Site.

"This is a big project. It should be listed. But its listing will also put it in good company with Stonehenge, the Great Wall, the pyramids ..."

One of the oldest communities in the United States, Poverty Point was in its heyday in 1700 B.C. Native Americans lived there as hunters and gatherers more than 500 years before the Trojan War, 300 years before King Tut became pharaoh, and about the same time that the Hebrews followed Abraham's great-grandson into Egypt.

After UNESCO experts study and verify the information, the Poverty Point site will be submitted to UNESCO for voting. If all goes according to plan, it could become a UNESCO Heritage Site by June 2014.

Poverty Point was rediscovered in the 1950s, when archaeologist James Ford noticed earthworks in an aerial photograph. The photo showed a plaza, several mounds, distinct ridges and a road.

So far, archaeologists have excavated only about 2% of the Poverty Point site. They've learned it was a major hub for Native Americans and that the area was incredibly rich in wildlife, fish, nuts and other foods.

What they don't know is exactly who lived there, what it was called, what was traded and why it was built in the swamp. The soil is very acidic, so not a single human bone has been found. Even without this information, Native Americans in and around Louisiana believe they have a deep connection to these natives.

"Most ... refer to those Native Americans who would have lived at Poverty Point as their ancestors," Hamilton said.
"dump": "CC-MAIN-2018-51",
"url": "http://www.nativevillage.org/Archives/2012/NOV%202012%20News/Advocates%20Push%20to%20LIst%20Louisiana's%20Poverty%20Point%20as%20World%20Heritage%20Site.htm",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829140.81/warc/CC-MAIN-20181218102019-20181218124019-00576.warc.gz",
"language": "en",
"language_score": 0.9222161173820496,
"token_count": 779,
"score": 3.296875,
"int_score": 3
} |
Learning through Sharing: Open Resources, Open Practices, Open Communication
Teacher Education and Computer-Mediated Communication SIGs joint event
Università di Bologna, Italy
For the third consecutive year (after Lyon and Barcelona), the Teacher Education and CMC SIGs organised a joint annual Seminar, which took place at the University of Bologna on 29 and 30 March. The theme chosen for this year's event was Openness as a way of learning through sharing.
Open Educational Resources (OER) are defined as "materials used to support education that may be freely accessed, reused, modified and shared by anyone" (Downes, 2011). Open Educational Practices (OEP) are practices which "support the production, use and reuse of high quality OER through institutional policies, which promote innovative pedagogical models, and respect and empower learners as co-producers on their lifelong learning path." (ICDE, 2011). Open Communication is reciprocal and respectful exchange which contributes to social presence in online learning (Gunawardena & Zittle, 1997), and the development of intercultural awareness and competence in language learning.
One of the affordances of the web is that it provides easy access to knowledge, and this constitutes one of its greatest potentials for transforming education. "A culture of sharing resources and practices will help facilitate change and innovation in education" (OER Commons, 2011). Open access initiatives to make research publications freely available online, and the adoption of open source software solutions such as Moodle or Mahara, are already having a significant impact on education. Flickr, iTunes U and YouTube, all based on the idea of sharing content openly, can also provide excellent resources for teachers and learners. The web also offers unprecedented access to interlocutors from different cultures and contexts, and open environments with multimodal channels for communication which can be harnessed for language and intercultural development.
The two-day seminar focused on the impact of adopting openness as a key principle in education. Together, we explored how open resources, open practices and open communication can be integrated in language teaching and learning, and in the initial and continuing development of language teachers.
The main themes discussed were:
- theories that underpin openness as a key principle in education
- use of OER in teaching and/or course development, including reusing and re-purposing existing resources for different contexts or resource-based learning
- integrating learner-generated content into language courses
- developing a culture of sharing amongst the teaching community (barriers to and advantages of sharing)
- sharing resources and/or practices in teacher education (e.g. through peer review of resources)
- sharing resources and intellectual capital with others to raise individual or institutional profiles (e.g. through publishing resources on iTunes U, or through a resource repository, open access publishing of research papers)
- promoting learner communication in 'open' environments (e.g. through online gaming, virtual worlds, international discussion boards, blogs ...)
- facilitating open communication in CMC, where 'sensitive' topics can be broached and diverse opinions are valued.
The nearly eighty proposals, arriving not only from all over Europe but also from Japan, Egypt, India and the USA, were a testimony to the interest that the two Eurocall Special Interest Groups have raised in recent years, as well as to the significance that the concept of Openness is acquiring worldwide.
The new format also proved highly successful: the authors of the forty-one selected abstracts were required to prepare short "working" papers, which were made available to all participants one month before the conference. Thus the presentations, whose purpose was simply to refresh the audience's memory, were reduced to a few minutes, leaving over half an hour per session for discussion. In addition, each session put together three papers under a common theme, drawing out common issues as well as diverse approaches.
The workshop also included two plenary talks by Eleonora Pantò, who provided an overview of the Openness movement in education, and Russell Stannard, who demonstrated tools that can be used in language teaching and learning.
A selection of the papers will also be published in two Special Issues (one dedicated to CMC and Open Communication, and one to OERs and OEPs) of the open-access Journal of e-Learning and Knowledge Society. An e-book of case-studies of Open Educational Resources and Practices, targeted at practitioners, is also in preparation.
For further information, http://eurocallsigsbologna.weebly.com
Downes, S. (2011). Open Educational Resources: A Definition. In Half an Hour (blog) http://halfanhour.blogspot.com.es/2011/07/open-educational-resources-definition.html
Gunawardena, C. N., & Zittle, F. J. (1997). Social presence as a predictor of satisfaction within a computer-mediated conferencing environment. The American Journal of Distance Education, 11(3), 8-26.
ICDE (2011). Definition of Open Educational Practices. http://www.icde.org/en/resources/open_educational_quality_inititiative/definition_of_open_educational_practices/
OER Commons (2011) OER Community. http://www.oercommons.org/community
"dump": "CC-MAIN-2021-21",
"url": "https://polipapers.upv.es/index.php/eurocall/article/view/11382/11062",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992721.31/warc/CC-MAIN-20210513014954-20210513044954-00119.warc.gz",
"language": "en",
"language_score": 0.9193033576011658,
"token_count": 1197,
"score": 2.90625,
"int_score": 3
} |
Research & Education : Working with Teachers
Teachers can enliven their students’ understanding of local history through guided tours or special research projects. We offer tours of our museum exhibits, the Middletown Heritage Trail and local historic graveyards, appropriate for students in grade four and above. We also can advise teachers on special student research projects as well as ways to incorporate local historic artifacts and documents in their course curriculum.
For more information, contact us at (860) 346-0746 or e-mail us.
Workbook – A Brightly Colored Past
Ask most young people if slavery existed in New England, and nine times out of ten, they’ll answer, “Of course not, only the South had slaves.” But fourth graders in Middletown have a better understanding of this sad chapter in American history, thanks to A Brightly Colored Past, a workbook on local African-American history produced by the Middlesex County Historical Society.
The first of its kind in Connecticut, A Brightly Colored Past examines the rich history of African Americans in Middletown and the surrounding area from the Colonial era to the 1960s Civil Rights movement. The 46-page workbook has been used since 1994 in the city’s eight elementary schools as part of the curriculum on state history.
A Brightly Colored Past brings history alive with its many stories of local heroes. Among them is the African-born Venture Smith, a slave, who after years of hard work was able to buy his freedom and that of his wife and children. They lived the rest of their lives in Haddam Neck. Another chapter chronicles Revolutionary War soldier Kay Cambridge, one of the many African-American men from Middletown who fought for the establishment of this country. Also told is the story of Prudence Crandall of Canterbury, who defied established practices to teach black children to read and write.
Chock-full of games, puzzles and activities, A Brightly Colored Past makes history accessible to young minds. A maze game, for example, teaches children about the Underground Railroad: kids have to find their way to freedom without stumbling into the hands of the slave owner or slave catcher. A hidden-word puzzle reinforces the lesson that few slaves could keep their African first names; most had them changed by their owners.
“To teach children history you have to relate things from kids’ everyday lives to the lives of people in the past,” says former Historical Society Director Di Longley, who researched and wrote the workbook. “Previously, we had few signposts for African-American kids that pointed them towards their past. But as with all maps, the benefit is for all travelers in the community. The African-American history of Middletown is everyone’s history.” | <urn:uuid:4954788d-ce71-4aea-922a-35fa97df97ed> | {
"dump": "CC-MAIN-2022-49",
"url": "https://mchsctorg.wordpress.com/research-education/working-with-teachers/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711221.94/warc/CC-MAIN-20221207221727-20221208011727-00205.warc.gz",
"language": "en",
"language_score": 0.9679555296897888,
"token_count": 590,
"score": 3.734375,
"int_score": 4
} |
What is Penetration Testing? [A Brief Explanation]

By Rakesh Patel. Last updated: June 26, 2023.

What is Penetration Testing?

Penetration testing is a type of security testing conducted on a software application, network, or computer system to identify potential vulnerabilities that could be exploited by attackers. Its primary objective is to secure sensitive data from threats by identifying system weaknesses and addressing them promptly. There are three main types of pen testing: black box, white box, and gray box testing.

Penetration testing is part of non-functional testing, which covers making software sound in aspects beyond its features, including performance, usability, and compatibility. As with functional testing, conducting all types of non-functional testing is important; a general guide to software testing types will cover the essentials if they are unfamiliar.

What is the Importance of Penetration Testing?

Here is how penetration tests help before you launch software to production servers.

Identifying the Vulnerabilities

Penetration testing helps you identify vulnerabilities that could be exploited by attackers. These might exist in operating systems, services, application flaws, improper configurations, or risky end-user behaviour. Before making software live, you must check all of its security aspects.

Validation of Security Measures

Penetration testing helps validate the effectiveness of defensive mechanisms and adherence to security policies and compliance requirements. Pen tests provide an independent and objective view of network, system, and application security, helping organizations understand their security posture more accurately.

Prevention of Financial Loss

By identifying and addressing vulnerabilities before attackers exploit them, penetration testing can save an organization from the monetary losses associated with a breach, including fines, recovery costs, and lost revenue due to downtime or damage to the brand's reputation.

Protection of Client Trust and Company Reputation

A breach can expose sensitive customer data and severely damage a company's reputation. Penetration testing helps organizations protect their reputation and maintain customer trust by finding and fixing issues before data is compromised.

Compliance Requirements

Regulations such as the GDPR, PCI-DSS, and HIPAA require regular penetration testing as part of their compliance requirements. Organizations that fail to conduct these tests may face hefty fines and penalties.

Proactive Approach

Penetration testing provides a proactive way to address security before incidents occur.
It is always better to identify and fix critical security vulnerabilities proactively rather than respond to a security breach after it has occurred.

Real-world Scenario

Penetration testing mimics real-world attack scenarios, offering an organization a practical analysis of its security posture. This approach provides more significant insights than theoretical assessments and can reveal how effectively the organization's defences would hold up against an actual attack.

5 Steps of the Penetration Testing Process

1. Planning and Reconnaissance

At the initial stage, you define the scope and goals of the pen test, including the systems or software to be tested and the testing methods. You then collect relevant information about the target system, such as network and domain names and mail servers. Any potential legal implications are addressed at this point, and permission is obtained for the planned testing.

2. Scanning

In this stage, the penetration tester interacts with the target system by sending data to it and analyzing its responses, using a mix of manual and automated tools and methods.

- Static analysis: reviewing the application's code to estimate how it will behave while running. Static analysis tools can scan an entire codebase automatically in a short time.
- Dynamic analysis: inspecting the code in a running state, which gives a more practical, real-time view of the software's behaviour.
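As a concrete illustration of the scanning stage, below is a minimal sketch of a TCP connect scan in Python. It is illustrative only, not a substitute for a dedicated scanner such as Nmap; the localhost target, port range, and timeout are hypothetical values chosen for the example, and such a scan should only ever be run against systems you are authorized to test.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Attempt a full TCP connect() to each port; connect_ex() returns 0
    when the connection succeeds, which indicates the port is open."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Hypothetical target: check the first 1024 TCP ports on localhost.
    print(scan_ports("127.0.0.1", range(1, 1025)))
```

A connect scan like this is noisy and slow compared with the techniques real tools use, but it shows the core idea: probe each service entry point and record which ones respond.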
3. Gaining Access

The pen tester uses web application attacks such as SQL injection, cross-site scripting, and backdoors to uncover a system's vulnerabilities. The purpose is not just to exploit these vulnerabilities but also to understand the extent of the damage they could cause.

4. Maintaining Access

At this stage, the penetration tester imitates a real attacker by maintaining a presence in the exploited system. The aim is to see whether the vulnerability allows persistence, potentially leading to further exploitation over time. This often involves escalating privileges, gathering additional credentials, and pivoting to other systems.

5. Analysis and Reporting

In the final stage, you compile a comprehensive penetration testing report covering:

- the vulnerabilities found, with their nature and location
- the potential impacts of those vulnerabilities
- recommendations to address each vulnerability
- detailed findings, including tools used, methods applied, test sequences, and the outcome of each test

The report aims to give the organization a clear understanding of its software's weaknesses and actionable steps to improve its security.

Which Techniques Are Used for Penetration Testing?

Here is a list of techniques used to perform a pen test.

- Social engineering: manipulating individuals into revealing sensitive information. This might involve phishing (emails that trick users into revealing credentials), vishing (voice calls that do the same), and in-person social engineering.
- Packet sniffing: capturing data packets travelling over a network, which can reveal sensitive information and identify potential areas of vulnerability.
- Vulnerability scanning: using automated software to scan a system for known vulnerabilities, including insecure software configurations, outdated software with known exploits, and dangerous default settings.
- Password cracking: attempting to crack a user's password to gain unauthorized access to a system, using methods such as dictionary attacks, brute-force attacks, or rainbow tables (see the sketch after this list).
- Network mapping: discovering and visualizing nodes and pathways in a network, which helps a penetration tester understand how systems are interconnected and identify potential targets for exploitation.
- SQL injection: injecting malicious SQL code into a database query. If the database is not properly secured, an attacker can view sensitive information or manipulate database contents.
- Cross-site scripting (XSS): injecting malicious scripts into trusted websites. An attacker can use XSS to steal a session cookie and impersonate the user.
- Privilege escalation: exploiting a vulnerability in a system or application to gain elevated access to resources that are normally protected from an application or user.
- Malware injection: inserting malware into a system to create a backdoor, record keystrokes, or perform other malicious actions.
- DNS poisoning or spoofing: introducing corrupt Domain Name System data into the DNS resolver's cache, causing the name server to return an incorrect IP address and divert traffic.
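To illustrate the password-cracking technique listed above, here is a minimal sketch of a dictionary attack against an unsalted SHA-256 hash. The captured hash and the tiny wordlist are fabricated for the example; real tools such as John the Ripper handle salted and deliberately slow hashing schemes and wordlists with millions of entries.

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Hash each candidate word and compare it with the captured digest."""
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

if __name__ == "__main__":
    # Hypothetical captured hash: SHA-256 of the weak password "letmein".
    captured = hashlib.sha256(b"letmein").hexdigest()
    print(dictionary_attack(captured, ["password", "123456", "letmein"]))
```

The same loop underlies brute-force attacks; the only difference is that candidates are generated exhaustively instead of read from a wordlist, which is why slow, salted hashes raise the attacker's cost so sharply.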
Moreover, you can include cross-platform testing in the penetration testing process to identify and address vulnerabilities that affect the software's security on different platforms. This helps ensure comprehensive security coverage and provides confidence in the application's ability to protect sensitive data regardless of the platform it runs on.

Which Tools Are Used for Penetration Testing?

Pen testing tools are software applications used to discover, analyze, and exploit vulnerabilities in a system in order to assess its security. Commonly used tools include:

- Metasploit
- Nmap
- Wireshark
- Burp Suite
- Nessus
- OWASP ZAP (Zed Attack Proxy)
- SQLMap
- Aircrack-ng
- John the Ripper
- Kali Linux

Difference Between Manual and Automated Penetration Testing

| Parameter | Manual penetration testing | Automated penetration testing |
| --- | --- | --- |
| Scope and speed | Allows deep examination of smaller systems, but takes longer because of the human involvement required. | Swiftly scans large systems or networks and identifies known vulnerabilities rapidly. |
| Vulnerability detection | Excels at discovering complex, logic-based vulnerabilities and new threats that automated tools may overlook. | Highly effective at identifying common, well-known vulnerabilities and system misconfigurations. |
| Human intervention | Requires significant human effort, expertise, and time, as testers manually probe the systems. | Requires minimal human involvement once the software is set up, aside from analyzing the results. |
| Adaptability | Highly adaptable: human testers can quickly change strategy based on the system's responses. | Less adaptable: it operates on pre-set configurations and may not handle unique scenarios well. |
| Cost | Can be more expensive due to the extensive time and human resources required. | Typically less expensive, especially for large systems, due to reduced human involvement. |

In conclusion, penetration testing is a vital process that identifies and addresses potential system vulnerabilities to enhance security and comply with regulations, using both manual and automated methods.
"dump": "CC-MAIN-2023-40",
"url": "https://www.spaceo.ca/glossary/tech-terms/what-is-penetration-testing/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511351.18/warc/CC-MAIN-20231004020329-20231004050329-00364.warc.gz",
"language": "en",
"language_score": 0.9085992574691772,
"token_count": 2139,
"score": 3.140625,
"int_score": 3
} |
Hydraulic pumps are a major component of hydraulic machinery. Hydraulic pumps refer to pumps that are used primarily to deliver hydraulic liquids such as oil or water to the pump outlet. An interesting feature of hydraulic pumps is that when they are powered by hydraulic liquids, they can also function as motors.
In a fluid-power system, the pump is the power source. It is generally driven by an electric motor or an internal combustion engine. The pump draws fluid from a reservoir and delivers it to an actuator, which performs the work of the fluid-power system. The pump works in a rotary fashion: as it rotates, it produces a vacuum on the inlet side, which enables the fluid to flow into the pump, and it then ejects the fluid at a pressure higher than the atmospheric pressure around it. The standard hydraulic fluid used in these types of pumps is petroleum oil, though various non-flammable fluids are now used more often for safety reasons.
Most hydraulic pumps are rated between 500 and 15,000 pounds of force per square inch (psi), with most continuous-service pumps falling within the 2,000 to 4,000 psi range. Hydraulic piston pumps operate in the higher range for intermittent peak loads. Because hydraulic pumps cover such a wide range, they are commonly used at home as well as in the industrial sector.
When choosing a hydraulic pump, some important operational specifications and considerations need to be kept in mind. Hydraulic pumps are differentiated mainly by the technological differences and mechanisms used to operate them. Some common types are radial piston, external and internal gear, and vane pumps. A typical hydraulic pump has three to four stages in its operation. Operational specifications such as the operating temperature, the continuous operating pressure, the maximum operating pressure, the horsepower, the operating speed and the maximum fluid flow must all be considered when choosing a hydraulic pump.
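As a rough illustration of how these specifications interact, fluid power in horsepower is commonly estimated with the rule of thumb hp = (pressure in psi * flow in gpm) / 1714, and the drive motor is then sized up to cover pump losses. The sketch below assumes an 85% overall efficiency purely for the example; actual sizing should follow manufacturer data.

```python
def hydraulic_power_hp(pressure_psi, flow_gpm):
    """Fluid power delivered by the pump: hp = (psi * gpm) / 1714."""
    return (pressure_psi * flow_gpm) / 1714

def input_power_hp(pressure_psi, flow_gpm, overall_efficiency=0.85):
    """Motor power needed to drive the pump, allowing for pump losses."""
    return hydraulic_power_hp(pressure_psi, flow_gpm) / overall_efficiency

# Example: a continuous-service pump delivering 10 gpm at 3,000 psi.
print(round(hydraulic_power_hp(3000, 10), 1))  # ~17.5 hp of fluid power
print(round(input_power_hp(3000, 10), 1))      # ~20.6 hp from the motor
```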
Hydraulic pumps are normally made from sturdy materials, as they typically experience heavy wear and tear. The ambient temperature as well as the temperature of the pump will determine the density of the fluid and how well the pump works. Because of this, you shouldn't use the pump in extreme temperatures or let it run too hot.
There are several trusted names in the world of hydraulic pumps. One of the best is Bosch Rexroth. Rexroth pumps are reliable and provide good value for their price.
"dump": "CC-MAIN-2016-50",
"url": "http://www.industrial101.com/equipment/hydraulic-pumps.aspx",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541696.67/warc/CC-MAIN-20161202170901-00067-ip-10-31-129-80.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9541140794754028,
"token_count": 500,
"score": 3.65625,
"int_score": 4
} |
Honey lemon water has many health benefits due to its abundance of vitamins, nutrients, and antioxidants. Raw, unprocessed honey has antimicrobial properties and has been used for centuries to treat many health conditions. Lemons and lemon juice have high levels of vitamins and antioxidants that can help to strengthen your immune system, prevent infections, and keep your heart healthy.
Drinking warm honey and lemon water helps to boost your health and provide you with energy. In fact, many people drink warm honey-lemon water in the morning to give their metabolism a healthy boost.
In this article, you will learn about the many reasons why honey lemon water is so good for you. You will also find out how scientific research backs up many of the health claims of honey and lemon.
The Health Benefits of Lemon Water
Lemon is a tangy citrus fruit that is packed with goodness. In fact, consuming a glass of warm water with the juice of a freshly squeezed lemon can provide you with up to 50% of your recommended daily vitamin C intake.
Lemons are low in calories and carbohydrates. For example, the juice from one lemon contains 11 calories and 4 g of carbs. Lemon juice also contains folate, vitamin A, calcium, potassium, and magnesium.
One of the main benefits of consuming diluted lemon juice with honey is that lemons are rich in antioxidants. This is why a lot of people drink honey and lemon water for detox. Scientists say that citrus fruits like lemons are a rich source of vitamin C. However, other compounds in citrus juices like flavonoids also play an important role in keeping your body healthy.
You can also boost the antioxidant properties of lemon honey tea by adding some grated lemon peel. Research has revealed that lemon peel is also a rich source of antioxidants with antimicrobial properties.
"dump": "CC-MAIN-2019-35",
"url": "https://apkcorners.com/knowledge-of-health-in-knowledge-momentum/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316075.15/warc/CC-MAIN-20190821152344-20190821174344-00404.warc.gz",
"language": "en",
"language_score": 0.9450661540031433,
"token_count": 380,
"score": 2.71875,
"int_score": 3
} |
The Age of Alice: Fairy Tales, Fantasy, and Nonsense in Victorian England
On exhibit February - May, 2015
This year marks the 150th anniversary of the publication of one of the world's most famous works of fantasy: Lewis Carroll's Alice's Adventures in Wonderland. The first copies of the book were printed in July of 1865, to great success. In later years, other editions appeared, with new presentations. Alice's Adventures in Wonderland marked a key transition in literature, but other works incorporating fairy tales or elements of fantasy had appeared decades before and continued to appear throughout the century.
"dump": "CC-MAIN-2018-51",
"url": "https://specialcollections.vassar.edu/exhibit-highlights/2011-2015/age-of-alice/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827596.48/warc/CC-MAIN-20181216073608-20181216095608-00614.warc.gz",
"language": "en",
"language_score": 0.9421631693840027,
"token_count": 130,
"score": 2.953125,
"int_score": 3
} |
During the time Józef Piłsudski was the dictator of Poland, most Polish planning concentrated on contingencies for a possible attack from the East. It was only after Piłsudski's death in 1935 that the new Polish government and military reevaluated the situation and decided that the existing plan for a Polish-German war, dating from the mid-1920s (Plan "S"), was inadequate and needed revision. Up to 1938, however, the priority remained war in the East, not the West, and the majority of Polish fortifications were being erected on the Polish-Soviet border.
The first version predicted that the Germans would attack from Pomerania towards Warsaw, with supporting thrusts from Silesia and Prussia, aiming to establish an early link through the Polish Corridor between German Pomerania and Prussia. After the German annexation of parts of Czechoslovakia changed the borders, Polish planners revised the plan, expecting the main thrust to originate from Silesia and push through Piotrków and Łódź towards Warsaw and Kraków. The planners correctly predicted the direction of most German thrusts, with one crucial exception: they assigned low priority to a possible deep, flanking, eastward push from Prussia and Slovakia, a push that was, however, assigned high priority in the German plan (Fall Weiss).
A controversy arose over whether Polish forces should defend the lengthy borders or withdraw east and south and attempt a defense along a shorter line backed by rivers. Although the second option was militarily sounder, political considerations outweighed it: Polish politicians feared that Germany might be satisfied with occupying some disputed territories (such as the Free City of Danzig, the Polish Corridor and Silesia) and push for an early end to the war after seizing them. The western regions were also the most densely populated and held major industrial centers, crucial for mobilization and for any continued production of military equipment and supplies for the Polish Army.
Even with the decision to protect the borders, Poland was virtually encircled on three sides by the Germans, so it was decided that some areas had to be abandoned early on, as their defence would be next to impossible. Thus the north-western Pomorze and Poznań Voivodships were to be given up early, with a separate force, the Land Coastal Defence, protecting key parts of the coast as long as possible; most of the surface Polish Navy was to be evacuated to the United Kingdom as specified in the Peking Plan, while submarines were to engage the enemy in the Baltic Sea under the Worek Plan. The main Polish defence line was to run along the Augustów Primeval Forest, the Biebrza, Narew and Vistula rivers (with the towns of Modlin, Toruń and Bydgoszcz), the Inowrocław lakes, the Warta and Widawka rivers, the town of Częstochowa, the Silesian fortifications, the towns of Bielsko-Biała and Żywiec, the village of Chabówka, and the town of Nowy Sącz. The second defensive line was based on the Augustów Forest and the Biebrza, Narew, Bug, Vistula and Dunajec rivers. Finally, the third defensive line involved retreating southeast towards the Romanian border and holding out as long as possible in the Romanian bridgehead region.
The plan assumed the Soviet Union would remain neutral, as a Nazi-Soviet alliance seemed unlikely. It did, however, allow for a Lithuanian attempt to take Wilno, a city disputed between Poland and Lithuania, and a small Polish force, primarily elite units of the Border Defence Corps, was detached to secure that region.
The plan assumed that Polish forces would be able to hold out for several months but, owing to German numerical and technical superiority (the Germans were estimated to hold a two- to threefold advantage), would be pushed back until pressure from the Western Allies (France and the United Kingdom), who were obliged through the Franco-Polish Military Alliance and the Polish-British Common Defence Pact to launch an offensive in the West, drew enough German forces away from the Polish front to allow the Poles to mount a counteroffensive.
The plan correctly assumed the size, location and most directions of attack by the enemy. By the time of the German attack, however, the second and subsequent defensive lines had not been fully defined by the plan, nor had any of its aspects been tested in a military exercise. Other parts, particularly those dealing with communications and supplies, also remained unfinished.
When Germany invaded Poland on 1 September 1939, Polish forces were dealt a significant defeat at the Battle of the Border, just as the plan's critics had predicted. Further factors, such as underestimating German mobility and blitzkrieg strategy, overestimating Polish mobility, the Soviet invasion of Poland and the lack of promised aid from the Western Allies, contributed to the Polish forces' defeat by 6 October 1939.
- Plan Wschód (Plan East), a Polish defensive plan in case of an attack by the Soviet Union
"dump": "CC-MAIN-2014-52",
"url": "http://en.wikipedia.org/wiki/Plan_West",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802773066.29/warc/CC-MAIN-20141217075253-00108-ip-10-231-17-201.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9652597904205322,
"token_count": 1171,
"score": 3.59375,
"int_score": 4
} |
Definition of lèse–majesté
1 a : a crime (such as treason) committed against a sovereign power
b : an offense violating the dignity of a ruler as the representative of a sovereign power
2 : a detraction from or affront to dignity or importance
Origin and Etymology of lèse–majesté
Medieval French lese majesté, from Latin laesa majestas, literally, injured majesty
First Known Use: 1536
"dump": "CC-MAIN-2017-04",
"url": "https://www.merriam-webster.com/dictionary/lese-majeste?show=0&t=1293657888",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00394-ip-10-171-10-70.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.8458623886108398,
"token_count": 139,
"score": 3.234375,
"int_score": 3
} |
Asian Americans-Political activity, Asian Americans-Ethnic identity, Asian American arts, Hip-hop, Discrimination against Asian Americans, Activism, Asian American, Spoken word, Legal subjugation, Racism, Discrimination, Resistance, Collective community racialization
Activism is a form of protest and contestation. In 1968, Asian American activism was defined by political protests for a collective identity. In 2011, as newer ethnic groups immigrate to the United States of America, the definition of Asian America has changed, and so has the definition of activism. As Asian Americans try to reconcile and redefine their collective identity, many are using social media and art as a form of activism. However, what are some of the consequences of claiming a political identity created 43 years ago? Are we still a collective group fighting for common goals, or are we romanticizing an identity that no longer exists? What do we lose and what do we gain by claiming this identity? How can we (re)define Asian America through time, through loss, and through reconciliation?
Lei, Judy J., "Reminisching with rhymes : (re)imagining (r)evolution within Asian America through arts and activism and Dividing lines, a play" (2011). Honors Project, Smith College, Northampton, MA.
"dump": "CC-MAIN-2021-21",
"url": "https://scholarworks.smith.edu/theses/316/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988923.22/warc/CC-MAIN-20210508181551-20210508211551-00368.warc.gz",
"language": "en",
"language_score": 0.8901963829994202,
"token_count": 362,
"score": 2.515625,
"int_score": 3
} |
Interested in the health benefits of turmeric? This magical spice is one of the best foods (or supplements) you can consume for your health. Turmeric is a plant common in South Asia, particularly India, where it is widely used in the production of spices; it is the key ingredient that gives curry powder its yellow shade. The spice is also known for its medicinal properties and has been used in India for centuries as a natural remedy for a multitude of ailments.
The Health Benefits of Turmeric
While commonly used medicinally for decades, turmeric as an herbal medicine caught the attention of the modern world only recently. Scientists have only begun revealing the exact health benefits of this plant. Research and history shows how turmeric can be utilized to help treat or prevent many of the most common and serious health conditions. Here are some of the top health benefits of turmeric:
- Turmeric powder has antibacterial and anti-inflammatory properties that make it an ideal antiseptic used in home remedies for wounds.
- Possibly the most impressive of turmeric's health benefits: research has repeatedly shown that turmeric is a powerful cancer fighter. Curcumin, a powerful natural anticancer compound found in turmeric, has been shown across more than nine studies to decrease brain tumor size in animals by 81 percent. Researchers at UCLA have even found that curcumin is able to block cancer growth.
- Further adding on to turmeric’s cancer-fighting abilities, the spice has also been shown to help prevent breast cancer. Curcumin has been found to possess properties that reduce the expression of deadly molecules within cancer cells, and can potentially slow the spread of breast cancer.
- Turmeric can be used to naturally detoxify the liver.
- The spice is a natural painkiller.
- Research is beginning to show that turmeric may be effective at protecting against neuro-degenerative diseases such as Alzheimer’s disease. Epidemiological studies show that levels of neurological diseases like Alzheimer’s are very low in elderly Indian populations, where turmeric is a common spice.
- May be beneficial in treating psoriasis.
- Patients with myeloma could possibly be treated with turmeric in the near future.
More studies are currently being performed to reveal other health benefits of turmeric. Whether you use it as a supplement or to spice up your favorite dishes, turmeric will assist in keeping you healthy. As studies progress, turmeric is expected to be a key ingredient in the prevention and treatment of many of today’s diseases.
“Since curcumin is an antioxidant, anti-inflammatory and lipophilic action improves the cognitive functions in patients with AD. A growing body of evidence indicates that oxidative stress, free radicals, beta amyloid, cerebral deregulation caused by bio-metal toxicity and abnormal inflammatory reactions contribute to the key event in Alzheimer’s disease pathology,” says a study on PubMed.
"dump": "CC-MAIN-2017-30",
"url": "http://naturalsociety.com/overcoming-pharmaceuticals-top-health-benefits-of-turmeric/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424610.13/warc/CC-MAIN-20170723202459-20170723222459-00343.warc.gz",
"language": "en",
"language_score": 0.9457306861877441,
"token_count": 765,
"score": 2.640625,
"int_score": 3
} |
When it comes to deadly, contagious disease outbreaks like Ebola, the terms "quarantine" and "isolation" take on fresh relevance and urgency. Each has a distinct meaning in the public health context, though the words are often used interchangeably and both refer to protecting the public from communicable illnesses.
Relying on quarantine is a centuries-old strategy to separate the healthy from the sick in hopes of containing infectious disease. The first known formal quarantines went into effect in 14th century Europe to stop the spread of plague, known as Black Death. (The word "quarantine" itself is derived from the Italian quaranta, 40, referring to the number of days the ill were kept apart from everyone else). In 18th- and 19th-century America, quarantines were imposed during outbreaks of yellow fever and cholera. The U.S. Centers for Disease Control and Prevention (CDC) now maintains 20 quarantine stations in the United States, which can detain and examine people — and animals — believed to be carrying dangerous infectious diseases.
A quarantine goes into effect when people have been exposed to an infectious disease but "may or may not become ill," according to the CDC. Since it's not yet known whether they are infected, they're separated from the general population to prevent possible spread of the disease. Pandemic influenza, SARS, cholera, diphtheria, tuberculosis, plague, smallpox, yellow fever and viral hemorrhagic fevers (such as Ebola) are all subject to quarantine in the United States.
Isolation of patients, a more extreme step, can be imposed when people have already fallen ill. In the U.S., this generally means that patients are confined to medical facilities, visits by others are severely restricted, and medical personnel are required to wear protective gear.
Two American Ebola patients are in isolation at Atlanta's Emory University Hospital, home to one of four U.S. "patient biocontainment units," super-charged intensive care units that are specially equipped to handle the most serious cases. Dr. Bruce Ribner, director of Emory's Serious Communicable Disease Unit, explains how things work at his hospital's special isolation unit:
How is this unit different from an ordinary U.S. medical isolation facility — what is the equipment and infrastructure that makes it unique?
The four Patient Biocontainment Units in the United States have a combination of factors to control the spread of infectious pathogens that are not found together in any other units around the country. The air pressure is negative so that air flows from the hallway to the anteroom to the patient room. The room is designed as an ICU so that patients with any degree of illness can be safely cared for. The rooms have large anterooms and a biosafety cabinet for specimen processing.
The rooms have 20 air changes per hour so that all infectious particles are rapidly removed. Air flow is laminar in nature, which means it flows from the supply vent to the return with little potential for mixing. Air is HEPA [high efficiency particulate air] filtered before being exhausted.
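For context, the effect of those 20 air changes per hour can be estimated with the standard dilution-ventilation formula used in infection-control guidance, which assumes perfectly mixed room air: the time in minutes to remove a fraction f of airborne contaminants is t = (60 / ACH) * ln(1 / (1 - f)). A short sketch, with the caveat that the unit's directed laminar flow should clear air faster than this well-mixed estimate suggests:

```python
import math

def minutes_to_clear(ach, removal_fraction=0.99):
    """Minutes for room ventilation to remove a given fraction of airborne
    contaminants, assuming perfect mixing: t = (60 / ACH) * ln(1 / (1 - f))."""
    return (60 / ach) * math.log(1 / (1 - removal_fraction))

print(round(minutes_to_clear(20), 1))         # ~13.8 min for 99% removal
print(round(minutes_to_clear(20, 0.999), 1))  # ~20.7 min for 99.9% removal
```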
What is the significance of a negative air pressure system for medical isolation — in other words, why is the special air filtration so important?
Negative air pressure means that air moves from the hallway to the anteroom to the patient room, and not in the reverse direction. Since Ebola virus is not spread through the air, this feature is not important for the treatment of patients with Ebola virus infection. However, for diseases that are spread via the air, negative air pressure ensures that air carrying infectious particles does not spread to the hallway or other parts of the hospital, thus preventing spread of infection to visitors, patients or health care workers.
What exactly must medical staff wear when they enter the facility? Does it take a long time to gear up?
We care for the patients using personal protective equipment designed to prevent our staff from coming into contact with blood, body fluids and large respiratory droplets. If the patients are having lots of diarrhea or vomiting we use Tyvek suits. Otherwise we use standard gowns and gloves. We use masks and either face shields or goggles to prevent exposure to respiratory droplets. We have never timed the putting on or taking off of personal protective equipment, but it doesn't take that long.
How are these and any medical equipment disposed of or sanitized after medical staff leave the unit?
All disposables are autoclaved [sanitized via pressurized steam] and then incinerated. Equipment that is not disposable is disinfected according to the manufacturer's directions.
Your unit includes windows through which patients can see family members and other visitors with whom they can't have physical contact while they're ill. What kind of windows are these?
[They] are standard glass windows sealed in the normal fashion.
How are doors in the unit sealed?
The doors don't need to be sealed because all airflow goes into the patient room since the rooms are under negative pressure.
How is food for the patients delivered into and taken out of the facility?
Food is carried in on disposable trays. All remains are autoclaved and incinerated.
How exactly is waste disposed of?
We disinfect it to kill any viruses and then flush the material down the toilet.
How do patients bathe and use toilets in the facility — are there special accommodations for these needs?
Patients shower as normal. Waste is handled as [previously described].
Is the unit intended for any serious disease?
The unit is designed to care for patients infected with any pathogen. All of its features are not required to care for a patient infected with the Ebola virus — but it is a convenient location to care for such a patient.
"dump": "CC-MAIN-2017-51",
"url": "http://wcbe.org/post/caring-american-ebola-patients-inside-emorys-isolation-unit",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948511435.4/warc/CC-MAIN-20171210235516-20171211015516-00047.warc.gz",
"language": "en",
"language_score": 0.9539874792098999,
"token_count": 1180,
"score": 3.984375,
"int_score": 4
} |
Nanotech particles can be small enough to get inside body tissues
Attitudes to nanotechnology may be determined by religious and cultural beliefs, suggest researchers writing in the journal Nature Nanotechnology.
They say religious people tend to view nanotechnology in a negative light.
The researchers compared attitudes in Europe and the US and looked at religious and cultural backgrounds.
They say the findings have implications for scientists and politicians making policy decisions to regulate the use of nanotechnology.
The researchers compared attitudes to nanotechnology in 12 European countries and the US.
They then rated each country on a scale of what they called "religiosity" - a measure of how religious each country was.
They found that countries where religious belief was strong, such as Ireland and Italy, tended to be the least accepting of nanotechnology, whereas those where religion was less significant, such as Belgium or the Netherlands, were more accepting of the technology.
Professor Dietram Scheufele from the Department of Life Sciences Communication at the University of Wisconsin, US, who led the research, said religious belief exerted a strong influence on how people viewed nanotechnology.
"Religion provides a perceptual filter; highly religious people look at information differently, it follows from the way religion provides guidance in people's everyday lives," he said.
The US was found to be the most religious country in the survey, and also the least accepting of nanotechnology.
The researchers say it is understandable that there would be a conflict between religious belief and nanotechnology, especially around what they call "nano-bio-info-cogno" (NBIC) technologies and their potential to create life at the nano-scale without divine intervention.
"It's not that they're concerned about not understanding the science, more that talking openly about constructing life raises a whole host of moral issues," said Professor Scheufele.
Nanotechnology could be used to treat disease at a sub-cellular level
"It is not a study about what religions or believers think about nanotechnology, but about the influence of religiosity on views of nanotechnology. Indeed, what it measures as the national 'religiosity' of different countries seems odd compared with my experience of working with several of the countries on issues of religious belief and technology," said Dr Donald Bruce, a technology consultant.
"A second major concern is what is meant by the term 'nanotechnology'. It has been apparent for several years in public engagement with nanotechnologies that to ask the someone if 'nanotechnology is morally acceptable' is largely meaningless, because 'nano' can be as varied as the technology to which its innovations are applied."
A similar study in the US looked at attitudes to nanotechnology and wider cultural and political beliefs.
People were asked about their views on a range of subjects, including risk from the internet, genetically modified food, nuclear power and mad cow disease.
Broadly, if they thought these were risky, they thought nanotechnology was too.
The researchers say their findings support the idea that underlying cultural beliefs have a stronger influence on opinions formed about nanotechnology than science based information about its potential and pitfalls.
Professor Scheufele says the findings have implications for policymakers trying to regulate nanotechnology.
"How do we regulate something where we have different moral ideas from the public?
"We need to get to grips with the idea that the exact same piece of information can have a different meaning to different people; it's the age-old dilemma for science about what could be done versus what should be done." | <urn:uuid:74999a16-2817-439a-a47b-a05180af72b4> | {
"dump": "CC-MAIN-2016-36",
"url": "http://news.bbc.co.uk/2/hi/science/nature/7767192.stm",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982932823.46/warc/CC-MAIN-20160823200852-00033-ip-10-153-172-175.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9591280221939087,
"token_count": 723,
"score": 3.03125,
"int_score": 3
} |
Expectant moms already have plenty to worry about including keeping up with medical appointments and setting up a nursery. However, one very easy and vitally important thing to do for a healthy baby is to make sure pregnant and nursing women get enough iodine.
Iodine is an essential element for healthy human life, enabling the thyroid gland to produce the hormones needed for proper metabolism. When children in the womb don't get enough iodine from their mother, fetal brain development is impaired. During pregnancy, iodine deficiency can cause a child to develop learning disabilities and mental retardation, as well as developmental problems affecting speech, hearing and growth.
“Iodine deficiency disorder (IDD) is the single greatest cause of preventable mental retardation,” says Kul Gautam, the former deputy executive director of UNICEF. “Severe deficiencies cause cretinism, stillbirth and miscarriage. But even mild deficiency can significantly affect the learning ability of populations. Scientific evidence shows alarming effects of IDD. Even a moderate deficiency, especially in pregnant women and infants, lowers their intelligence by 10-15 IQ points.”
Historically, populations got iodine from certain foods, especially seafood, plants grown where soil contains iodine and the meat of animals whose forage grows in such soils. However, weathering and erosion can leach iodine from the soil over time leaving it deficient. Plants and animals raised in areas with iodine-deficient soil will be poor sources of iodine in the human diet and the animals themselves will be less healthy and productive.
To help address iodine deficiency, salt producers in the United States cooperated with public health authorities starting almost a century ago to add iodine to table salt and made both iodized and plain salt available to consumers at the same price. Today, about 70 percent of the table salt sold in the United States is iodized. In fact, salt has been and remains the primary source for iodine in the American diet. The effect of this public health initiative has been to virtually eliminate the incidence of thyroid related illness, including goiters.
Today Americans are consuming less and less iodine. Salt used in processed foods is mostly not iodized and given that people are cooking less at home and buying either restaurant or processed foods, iodine intakes in the U.S. have declined more than 37 percent from about 250 micrograms/day to 157 micrograms/day since the 1970s.
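As a quick sanity check, the decline quoted above works out as stated. This is a minimal calculation; the figures come from the article, and the simple-percentage interpretation is an assumption:

```python
# Verify the quoted decline in U.S. iodine intake (simple percentage assumed).
old_intake = 250  # micrograms/day, ~1970s level quoted above
new_intake = 157  # micrograms/day, recent level quoted above

decline = (old_intake - new_intake) / old_intake
print(f"Decline: {decline:.1%}")  # ~37.2%, matching "more than 37 percent"
```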
“Pregnant women need to increase their iodine intake,” says Dr. Elizabeth Pearce, associate professor of medicine at Boston University School of Medicine. “Women who are breastfeeding also need higher iodine intake, since iodine is transported into breast milk, where it is important for infant nutrition. Pregnant women need 220 micrograms iodine every day. Breastfeeding mothers need 290 micrograms daily. These levels are higher than the 150 micrograms daily recommended for most adults … pregnant women and women of childbearing age should eat a varied diet rich in iodine-containing foods, such as fish and milk, and should choose iodized salt over non-iodized salt.”
Medical professionals including the American Academy of Pediatrics and The American Association of Clinical Endocrinologists (AACE) have also started to recommend iodine supplements for women of childbearing age particularly if they are pregnant or breast feeding. The American Academy of Pediatrics additionally warned that iodine deficiency for pregnant women or nursing mothers makes mother and child more vulnerable to some pollutants found in the environment such as nitrates, thiocyanates and perchlorates.
Iodized salt has been one of the greatest and most economical public health successes and it continues to help raise healthy, smart children. | <urn:uuid:a5e6df8a-4965-41d2-af96-d4ce1db9ced9> | {
"dump": "CC-MAIN-2018-22",
"url": "http://www.saltinstitute.org/2014/11/03/expectant-moms-need-iodine-for-healthy-children/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865468.19/warc/CC-MAIN-20180523082914-20180523102914-00075.warc.gz",
"language": "en",
"language_score": 0.9559201002120972,
"token_count": 751,
"score": 3.1875,
"int_score": 3
} |
Visual communications or visual communications design is a creative process that combines the visual arts and technology to communicate ideas. It begins with a message that, in the hands of a talented designer, is transformed into visual communication that transcends mere words and pictures. By controlling color, type, movement, symbols, and images, the visual communication designer creates and manages the production of visuals designed to inform, educate, persuade, and even entertain a specific audience.
The two terms are often used interchangeably. However, visual communication is the broader term, defining the kind of communication people receive through reading or seeing. It is a combination of words, pictures, photography, symbols, and signs. People use these tools to promote their businesses and to express their views in unique and beautiful ways. The design side of visual communication exists to accommodate our current world of bombardment by visual stimuli.
The brain processes an image 60,000 times faster than text.
In a visual communications associate’s degree program, you will be given the opportunity to master creative design while learning how to utilize computer technology in the world of business. Your education will teach you how to create visual productions for advertising and business purposes. You learn to visualize a concept and then produce it to sell a certain product or idea. This is done through the use of most up-to-date technological and computer skills available. You will likely learn about all forms of visual communication, including print materials, Web design, and film.
You may have the convenience of attending a local community college to earn an Associate of Fine Arts in Visual Communication. Here are samples of courses in a typical program of this type:
Visual Design: This provides an introduction to the concepts and processes of graphics and media design. Students learn about the field of design and work with computers in bitmap, vector, and multimedia software as well as with traditional art and design media.
Visual Design for the Web: This course covers the concepts and techniques of art making for the internet and other interactive media environments. Students will create original websites with attention to design fundamentals such as color, typography, imagery, and composition.
At the baccalaureate level, the degree names vary from college to college. There are Bachelor of Fine Arts (BFA) and Bachelor of Arts (BA) programs. Some schools offer both degrees. The BFA study plan might require 22 courses in the major area, while the BA requires 36 hours of coursework (or 12 courses). Generally, the BFA affords a larger portfolio due to considerably more design studio experience. The BA is more suited to students who wish to double major or desire to create a custom experience combining visual communication design with art, art history, marketing, psychology, athletics, and more.
A BFA degree will develop your skills as a designer more than an Associate degree. For example, through assignments, students will work digitally to explore color, form, composition, texture, and typography. Students will gain a fluency in typography and its systematic application to traditional and modern media. The degree might also include the effective use of motion graphics through sketching, storyboarding, kinetic type, animation, narration, and soundtracks. Media delivery may include digital signage, web, broadcast, and other public venues such as a planetarium.
A bachelor’s degree will open up more employment possibilities in Graphic Design, Graphic Art, Web and Multimedia Design, Packaging Design, Marketing Communications, Art Direction, Branding Design, Design Education, and also Independent Graphic Design Consulting and Operations.
A graduate program offers diverse areas to expand your skills in the realm of visual communication. For example, specialties are available in the arena of advertising and marketing. In addition, there are several online programs.
A Master’s degree in Professional Studies in Design Management & Communications exposes you to the fields of design, communications, marketing, and related areas. You will gain expertise across a wide range of specialties, including creative strategy, design leadership, digital and traditional marketing communications, social media, management, and branding.
You may also consider a graduate degree in Visual Communication Design (MVCD). These can be studio-based 2-year programs designed to provide specialized studio design opportunities for those students who have an undergraduate degree in visual communication or graphic design. An alternative is a three-year first professional degree for students who do not have an undergraduate degree in visual communication or graphic design.
Another option at this level is a Master of Fine Arts with a major in Communication Design. This coursework will suit students who wish to study areas of corporate advertising art direction, graphic design, and digital media design. The curriculum includes the exploration and experimental use of the written word integrated with visual forms by using digital and traditional photographic, illustrated, and graphic media.
You will learn about the communication through marketing materials. This entails studying the development of typographic elements, layout grid constructs, photo imagery, and illustration for publication of corporate marketing materials. Your knowledge of marketing extends into the role of sustainable package design, if that meets the requirements of clients and consumers in the global marketplace. | <urn:uuid:66591211-eff5-4711-b00a-f69db61598b7> | {
"dump": "CC-MAIN-2019-26",
"url": "https://www.degreequery.com/what-are-my-degree-choices-to-work-in-visual-communications/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999298.86/warc/CC-MAIN-20190624084256-20190624110256-00077.warc.gz",
"language": "en",
"language_score": 0.9216948747634888,
"token_count": 1043,
"score": 3.3125,
"int_score": 3
} |
March 21 (UPI) -- As alpine permafrost thaws, new sources of decaying organic matter become available to CO2-emitting microbes. Climate scientists and their models may be underestimating this stealthy source of carbon dioxide, according to a new study.
In a paper published Thursday in the journal Nature Communications, scientists presented evidence that Colorado's Front Range tundra emits more CO2 than it absorbs each year, making it a net carbon contributor -- potentially worsening the impacts of climate change.
Previous studies have suggested melting Arctic tundra is releasing CO2 that has been sequestered in the frozen soil for centuries.
"We wondered if the same thing could be happening in alpine terrain," lead researcher John Knowles said in a news release. "This study is a strong indication that that is indeed the case."
Now a scientist at the Institute of Arctic and Alpine Research and a researcher at the University of Arizona, Knowles conducted the study as a geography doctoral student at the University of Colorado, Boulder.
The carbon sequestering services of forests are well documented. Trees and other types of vegetation absorb CO2 via photosynthesis. When their leaves and branches fall to the ground and decay, the organic matter is broken down by microbes, releasing CO2 back into the air. But much of the carbon absorbed by trees is stored in the tree's root system and the surrounding soil -- more than is released by munching microbes -- allowing many forest ecosystems to serve as a carbon reservoir.
According to the latest study, tundra and melting permafrost feature a slightly different balancing act -- one that is less friendly to the warming climate.
When previously unavailable carbon-rich organic matter becomes available to hungry microbes, the ecosystem's greenhouse gas emissions increase. To quantify this dynamic, Knowles and his research partners measured surface-to-air CO2 transfer rates between 2008 and 2014 at Colorado's Niwot Ridge Long Term Ecological Research site.
Across the tundra landscape in Colorado's Front Range, scientists confirmed more carbon is emitted than absorbed over the course of each year. They also measured the release of old carbon during the middle of the winter. The discovery suggests scientists have underestimated year-round microbial activity.
"Microbes need it to be not too cold and not too dry, they need liquid water," said Knowles. "The surprise here is that we show winter microbial activity persisting in permafrost areas that don't collect much insulating snowpack due to wind stripping it away."
Alpine forests are likely to remain carbon sinks, but fields of treeless tundra may continue to release greater levels of greenhouse gas as the climate warms.
"Until now, little was known about how alpine tundra behaved with regard to this balance, and especially how it could continue emitting CO2 year after year," Knowles said. "But now, we have evidence that climate change or another disturbance may be liberating decades-to-centuries-old carbon from this landscape." | <urn:uuid:ce796414-32e4-42b6-ad2a-352a1647fd12> | {
"dump": "CC-MAIN-2021-25",
"url": "https://www.upi.com/Science_News/2019/03/21/Study-Thawing-alpine-permafrost-a-stealth-source-of-CO2/7061553171063/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488257796.77/warc/CC-MAIN-20210620205203-20210620235203-00243.warc.gz",
"language": "en",
"language_score": 0.9451799392700195,
"token_count": 616,
"score": 3.640625,
"int_score": 4
} |
28/01/20 - Year 11 History Commemorate Holocaust Memorial Day – 27 January 2020
On the evening of Monday 27 January our Year 11 History pupils had the huge honour and privilege of being part of N Ireland's annual Holocaust Memorial Day Commemoration Service in Belfast City Hall. Our pupils travelled together with pupils and staff from the Royal School Armagh, Banbridge High School and New-Bridge Integrated College. Two of our pupils, Jodie Truesdale and Lucy Gray, took part in the service that marked the 75th anniversary of the liberation of the former Nazi concentration and extermination camp, Auschwitz-Birkenau, and the 25th anniversary of the genocide in Bosnia. Holocaust survivor Tomi Reichental was the powerful and very moving keynote speaker at this poignant event; he and his family were held in the Bergen-Belsen concentration camp from 1944 – 1945, until it was finally liberated by British troops. Our thanks go to Mr Megaw and Mrs Reid who accompanied our pupils to this very special and memorable event.
A recording of the HMD 2020 Commemoration Service can be viewed online through the Holocaust Memorial Day website.
"dump": "CC-MAIN-2020-16",
"url": "https://newtownhamiltonhigh.co.uk/january-2020/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371805747.72/warc/CC-MAIN-20200407183818-20200407214318-00459.warc.gz",
"language": "en",
"language_score": 0.9589914679527283,
"token_count": 259,
"score": 2.9375,
"int_score": 3
} |
Through DISCOVER, Marymount University students learn how to learn – by asking questions, probing beyond the surface, and working one-on-one with Marymount's scholarly faculty in a variety of disciplines. It's called inquiry-based learning, and it's a fundamental facet of Marymount's DISCOVER program, which fosters and supports research and creative activities in all academic programs at the University.

The First-Year Experience course introduces new first-year students to Marymount University and to learning in higher education.

Every major also has three required courses that have been designed to promote inquiry-guided learning. Dr. Virginia Lee, an expert in student learning, defines inquiry-guided learning as "an array of classroom practices that promote student learning through guided and, increasingly, independent investigation of complex questions and problems, often for which there is no single answer. Rather than teaching the results of others' investigations, which students learn passively, instructors assist students in mastering and learning through the process of active investigation itself."

In these inquiry courses, you will be engaged through a variety of teaching methods, including simulations/games, field trips, problems, case studies, projects, logs/journals, other writing assignments, debates/panels, and discussions.
"dump": "CC-MAIN-2014-49",
"url": "http://www.marymount.edu/academics/discover/inquiry.aspx",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400380355.69/warc/CC-MAIN-20141119123300-00231-ip-10-235-23-156.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.955244243144989,
"token_count": 271,
"score": 3.34375,
"int_score": 3
} |
Bagh-e Fin, Kashan, Iran.
The six and a half acre garden in Fin, a suburb outside Kashan, captures the Soleimaniyeh spring and directs it into a geometric layout of watercourses and pools, framing various small buildings and garden plots. Although a garden was in place much earlier, the standing buildings are from the Safavid (16th century) and Qajar (19th century) periods. Safavid constructions include the exterior wall and monumental entrance portal, the central pavilion, and a small bathhouse - famed as the site of Amir Kabir's murder. A larger bathhouse and a library were built during the Qajar period.
"dump": "CC-MAIN-2018-13",
"url": "https://www.flickr.com/photos/ensiematthias/2189171221/in/set-72157603706689403/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646189.21/warc/CC-MAIN-20180319003616-20180319023616-00289.warc.gz",
"language": "en",
"language_score": 0.9494407773017883,
"token_count": 145,
"score": 2.640625,
"int_score": 3
} |
Above Saturn’s north pole, clouds swirl in a distinct and stunning hexagonal shape. Discovered by NASA’s Voyager mission in 1981, Saturn’s hexagon is striking to behold, and one new study suggests that this six-sided vortex may actually be hundreds of kilometers tall.
After the Voyager mission pushed human exploration far out into the solar system and, subsequently, discovered Saturn’s hexagon whirling at a low altitude, the Cassini spacecraft returned to the ringed planet in 2004 and continued these observations. The spacecraft even spotted a high-altitude vortex at the planet’s south pole, but this vortex was not hexagonal.
Now, as part of a new study using Cassini data, researchers have discovered, for the first time, a high-altitude vortex forming at Saturn’s north pole. This vortex was spotted as the planet’s northern hemisphere approached summertime. And it has a hexagonal shape like the famous hexagon originally discovered closer to the planet’s surface. These findings suggest that the high-altitude vortex may be influenced by the low-altitude vortex, potentially forming an immense, tall tower, according to a statement.
Leigh Fletcher of the University of Leicester, UK, the lead author of this new study, described Saturn’s hexagon in an email as “a meandering jet stream” with a “hexagonal, six-sided appearance when viewed from over the pole.”
“The hexagon is just a current of air, and weather features,” Andrew Ingersoll, of the Cassini Imaging Team, said about the structure, according to a NASA statement.
While we’ve known about Saturn’s hexagon since 1981, this discovery of a hexagonal vortex at a higher altitude was a shock to the team. Fletcher said that “the presence of the hexagon, hundreds of kilometres above the clouds, was a total surprise.” The team didn’t expect to find an almost mirror image of Saturn’s famous hexagon shape farther up in the clouds.
Unfortunately, Saturn’s seasons last for a long time. “One Saturnian year spans roughly 30 Earth years, so the winters are long,” co-author Sandrine Guerlet from Laboratoire de Météorologie Dynamique, France added in the statement. So, these “seasonal vortexes,” or vortexes associated with seasons like Saturn’s summer, can’t be observed for long stretches.
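A rough calculation makes the point concrete. This is only a sketch, assuming a Saturnian year of about 29.5 Earth years and four seasons of equal length:

```python
# Rough length of one Saturnian season, implied by the quote above.
saturn_year = 29.5  # Earth years, approximate ("roughly 30" in the text)
seasons = 4         # assume four seasons of equal length

print(f"One season lasts about {saturn_year / seasons:.1f} Earth years")
# ~7.4 Earth years per season, which is why Cassini's long mission mattered.
```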
That means Cassini couldn’t see what was happening in the north pole’s high altitudes for many years because it was simply too cold to make observations.
However, Fletcher said that it's not clear if the vortex is always there but is just too cold to observe, or if the vortex only appears in warmer seasons. Fortunately, thanks to Cassini's remarkable lifespan, the craft was able to watch the planet long enough to gather invaluable data on Saturn's incredible vortices.
Still, a number of questions remain. “How did the hexagon come to be, how has it been stable for so long, and is it connected in any way to the deeper interior of Saturn?” Fletcher said. It’s also unclear how similar the northern and southern vortices are, as only one pole seems to have hexagonal vortices, according to the statement. But Fletcher said that despite outstanding questions, this work is “an important constraint on all our future models of this fascinating structure.”
This new study was published September 3, 2018 in the journal Nature Communications. | <urn:uuid:3f6f0b15-3d6a-46a6-8081-fe79474a6aa8> | {
"dump": "CC-MAIN-2022-33",
"url": "http://blog.vishaysingh.com/saturn039s-hexagon-could-be-an-enormous-tower/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573699.52/warc/CC-MAIN-20220819131019-20220819161019-00593.warc.gz",
"language": "en",
"language_score": 0.9312865138053894,
"token_count": 770,
"score": 3.890625,
"int_score": 4
} |
Jesse Bear, What Will You Wear? by Nancy White Carlstrom
Summary: Follow a young bear through a typical day as he describes all the things he “wears”.
Story Treasure: Flannel bear with 4 matching shirt/short outfits
Before beginning centers we looked through the book again, this time using the “Can You Find” Search on pg 9 of Before Five in a Row.
I brought in 8 pairs of matching fabric swatches: fleece, flannel, felt, silk, suede, denim, cotton, and sheer. I put one of each in a bag and gave each child one of the remaining 8 pieces. As I walked around the table, each student reached in the bag (without looking) and felt around for the matching fabric swatch. We talked about how the fabrics felt: scratchy, soft, thick, slippery, etc.
I brought in this bear puzzle and let the students take turns dressing the bear. Most did a great job of matching the outfits, others had fun being creative. I didn’t correct any of the students, we just commented on the outfits and let the students explain why they chose the clothing they did.
I brought in a shape sorter and the students took turns fitting the pieces into the appropriate openings. It was a favorite activity. We also pointed out the shapes in the kitchen tile.
MATH: Colors, Counting, Sorting
I brought in these counting bears and we spent awhile sorting them into colored bowls, counting them and naming the colors. | <urn:uuid:138f20eb-54d4-48cf-a2bd-f4ed64ef4c44> | {
"dump": "CC-MAIN-2020-24",
"url": "http://parentingwithcrunch.com/posts/toddler-co-op-class-jesse-bear-what-will-you-wear/?utm_source=rss&utm_medium=rss&utm_campaign=toddler-co-op-class-jesse-bear-what-will-you-wear",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347388427.15/warc/CC-MAIN-20200525095005-20200525125005-00368.warc.gz",
"language": "en",
"language_score": 0.9581530690193176,
"token_count": 319,
"score": 3.09375,
"int_score": 3
} |
Soccer may be the perfect sport for children. To participate, a child does not have to be big, tall, strong, or even fast. It is also a sport open to both genders. Although the aforementioned physical attributes are a plus, the reality is that at certain ages there are no great differences in muscular development, coordination or strength. Hence, you will see boys and girls playing on the same team at a nearly equal level. This is one of the reasons why in youth soccer (particularly at the youngest ages), the winning team is often the luckiest, as opposed to the most technically skilled. For the purpose of this article, we are not referring to those youth leagues or clubs that are preparing their kids for international competitions, but rather local school teams and community-based leagues which exist primarily to offer kids an opportunity to learn a new sport, meet new friends and just have fun.
Then there is the financial factor. Unlike other sports such as hockey or baseball where the equipment is often quite costly, soccer is a sport where with the exception of a few pads and gloves (for the goalie) there is no significant cost factor. In some places it is not uncommon to see kids kicking around rolled clothing, cans etc. In addition, the playing field can be anywhere a little open space can be found.
The third factor making soccer popular, particularly amongst parents, is the injury factor, or rather the relatively small number of injuries. Of course there will be bruised shins and knees as there are in most sports, but the most serious injuries are relatively nonexistent in soccer, particularly in the absence of the sliding tackles practiced and encouraged at the adult level. But even in the adult leagues, certain techniques such as the rear tackle are being outlawed to reduce career-shortening injuries.
Overall, soccer is a game where everyone can learn and have fun. It provides an opportunity for boys and girls at the earliest ages to improve coordination, balance, and cardiovascular conditioning.
The psychological and emotional benefits cannot be overlooked. The child who experiences defeat during play can learn resilience and persistence; attributes which the individual can carry throughout his/her life.
Thus for the above reasons, particularly financial, soccer is played by more youths than any other sport in the world. As you are aware, in Eastern, Middle Eastern and European countries, soccer or "football" as it is called, is the national sport. Even in America, where the "bat and ball" are king, it has been estimated that there are more youths playing soccer than enrolled in little league baseball and youth football combined.
During the course of a series of articles we will introduce you to youth soccer organizations here in America, and in future editions examine the sport as it is organized and practiced in other countries. An overall view of soccer in American schools will be examined, from the grammar school level through college. We will also provide links giving you the opportunity to visit the school teams, summer leagues and camps through their webpages. We also plan to expand this site to include articles by coaches, physicians, parents and the players themselves. This way all persons involved will be able to share valuable information from their unique perspectives.
For you coaches, physicians and league organizers, and parents, we invite you to communicate with us and contribute your views and recommendations for upcoming articles. For the youth who may be reading this article, we strongly encourage you to write us and provide us with your opinions and ideas for future articles. You might even want to just send us a team photo and information about your team to be published in our "Team Shots" section.
However, for now we invite you to join us in a look at one of America's leading soccer organizations specifically the American Youth Soccer Organization (AYSO). This organization headed by Exec. Dir. Dick Wilson, having more than 600,000 registered members, is one of the largest youth soccer organization in America. In our article, Director Wilson tells us about the organization's purpose and goals. We have also provided a link to AYSO's main website and a links section for AYSO clubs nationwide.
"dump": "CC-MAIN-2016-40",
"url": "http://www.lacancha.com/YouthSoc.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738659865.46/warc/CC-MAIN-20160924173739-00148-ip-10-143-35-109.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9618187546730042,
"token_count": 862,
"score": 2.609375,
"int_score": 3
} |
Reading Development in Chinese Children
Catherine McBride-Chang, Hsuan-Chih Chen
ABC-CLIO, Dec 30, 2003 - Education - 248 pages
This text reviews both similarities and unique cultural, linguistic, and script differences of Chinese relative to alphabetic reading, and even across Chinese regions. Chinese reading acquisition relies upon children's strongly developing analytic skills, as highlighted here. These 16 chapters present state-of-the-art research on diverse aspects of Chinese children's reading development.
This edited volume presents research on Chinese children's reading development across Chinese societies. Authors from China, Hong Kong, Singapore, and Taiwan, among others, present the latest findings on how Chinese children learn to read. Reading acquisition in Chinese involves some parameters typically not encountered in some other orthographies, such as English. For example, Chinese readers in different regions might speak different, mutually unintelligible languages, be taught to read with or without the aid of a phonetic coding system, and learn different scripts. This book both implicitly and explicitly considers these and other contextual issues in relation to developmental and cognitive factors involved in Chinese literacy acquisition.
One of the clearest themes to emerge from this volume is that, across regions, Chinese children, despite lack of explicit teaching of phonetic or semantic character components, learn to read largely by integrating visible print-sound and print-meaning connections. Rather than learning to read Chinese characters by rote, as is sometimes mistakenly believed, these children are analytic learners. Chapters in this book also cover such topics as Chinese children's reading comprehension, cognitive characteristics of good and poor readers, and reading strategies of bilingual and biscriptal readers. This book is a useful reference for anyone interested in understanding either developing or skilled reading of Chinese or for those interested in literacy learning across cultures. | <urn:uuid:99d21907-a4ca-4b6b-9c4f-622fdccd7f22> | {
"dump": "CC-MAIN-2017-22",
"url": "https://books.google.com/books/about/Reading_Development_in_Chinese_Children.html?id=796b8-kV5GQC&hl=en",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607647.16/warc/CC-MAIN-20170523143045-20170523163045-00086.warc.gz",
"language": "en",
"language_score": 0.9307297468185425,
"token_count": 366,
"score": 3.453125,
"int_score": 3
} |
The spies who helped win the Revolutionary War
"I only regret that I have but one life to lose for my country."
So wrote 21-year old Nathan Hale before being hanged for espionage by the British on Sept. 22, 1776. Hale had originally been encouraged to join the revolution by an old Yale classmate, Benjamin Tallmadge.
Tallmadge and Hale had been close during their time at Yale and often exchanged letters. Three years after their graduation, Tallmadge wrote to Hale, newly an officer in the American forces, saying, "Was I in your condition, I think the more extensive service would be my choice. Our holy Religion, the honor of our God, a glorious country and a happy constitution is what we have to defend."
Hale agreed with Tallmadge's sentiment and soon accepted an assignment to do more than just fight–he would spy from behind enemy lines. Although Hale's venture into espionage ended rather poorly, Tallmadge's revolutionary feelings did not subside. Soon, he would find himself at the center of the American Revolution's most important spy ring.
The Culper Ring, founded and supervised by Tallmadge, operated from late October in 1778 until the British evacuated New York in 1783. Although the ring was active for all five of these years, its most productive period was between 1778 and 1781.
Benjamin Tallmadge with his son, William.
After Tallmadge brought the ring together, it was led by Abraham Woodhull and Robert Townsend, codenamed "Samuel Culper, Sr." and "Samuel Culper, Jr." respectively. The codename "Culper" came straight from George Washington himself, a slight alteration of Culpeper County, Virginia where Washington had worked as a surveyor in his youth.
The ring was highly sophisticated, using methods still familiar today. Couriers, invisible ink, and dead drops were the norm. Some messages were hidden in plain sight, coded within newspaper advertisements and personal messages. Supposedly, one woman, Anna Strong, was even able to use the clothes she hung to dry to send messages to other members of the ring. Codes and ciphers were standard practice. These methods enabled agents to send Tallmadge apparently innocent letters. Tallmadge could pick out individual words to decode messages.
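To illustrate how such a word code works, here is a minimal sketch in Python. The codebook entries below are invented for the example; they are not Tallmadge's actual number assignments, which filled a far larger dictionary.

```python
# Illustrative dictionary code in the style of the Culper Ring: agreed-upon
# numbers stand in for sensitive words, so a letter reads innocently unless
# you hold the codebook. All entries here are hypothetical examples.
CODEBOOK = {"washington": 711, "troops": 703, "ships": 725, "york": 727}
REVERSE = {number: word for word, number in CODEBOOK.items()}

def encode(message):
    """Swap sensitive words for their code numbers; leave the rest alone."""
    return [CODEBOOK.get(word, word) for word in message.lower().split()]

def decode(tokens):
    """Recover the original words using the shared codebook."""
    return " ".join(str(REVERSE.get(token, token)) for token in tokens)

coded = encode("washington moves troops toward york")
print(coded)           # [711, 'moves', 703, 'toward', 727]
print(decode(coded))   # washington moves troops toward york
```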
While Woodhull and Townsend ran the show, many agents, couriers, and sub-agents were also involved. Caleb Brewster, Austin Roe, Anna Strong and the still-unidentified 'Agent 355' all played vital roles. Other members included Hercules Mulligan and his slave Cato. Mulligan warned in January 1779 of British plans to kidnap or kill senior American leaders including Washington himself. Cato delivered the vital message.
Other agents included Joseph Lawrence, Nathan Woodhull (Abraham's cousin), Nathaniel Ruggles, William Robinson and James Rivington. So solid was the ring's security that its very existence remained unconfirmed until the 20th century. Even Washington himself couldn't identify every Culper agent. Its strict security preserved both the ring and the lives of individual members, boosting their confidence in themselves and each other.
The Culper Ring's successes, what spies call coups, were many. They warned of a surprise attack on newly arrived French troops at Newport, Rhode Island. The forces, properly warned, were able to foil British plans to devastate their men while they recovered from their transatlantic voyage. The Culper spies uncovered British plans to destroy America's nascent economy by forging huge amount of Continental dollars. Continental dollars were soon withdrawn from circulation, replaced with coins by 1783.
Without the Culper Ring, Washington may have fallen for a raiding operation meant to divide his forces. In 1779, General William Tryon raided three main ports of Connecticut, destroying homes, goods in storage, and a number of public buildings. Tryon was attempting to split off a portion of Washington's forces to allow British forces to rout the Americans.
Washington did not ride out to meet Tryon. Instead, Tryon's forces rampaged through civilian land and the general was criticized by both American rebels and those who supported the British as barbarous.
By far the Culper Ring's most important coup was exposing General Benedict Arnold. Arnold, whose name has entered the American language as a metonym for treachery, was in contact with British spy Major John André and planned to surrender West Point to the British. The Culper Ring warned Tallmadge of a high-ranking American traitor, but lacked his identity. Tallmadge identified Arnold when André was captured and later hanged for his treason. Although Arnold escaped with his life, West Point remained safe from the British.
Benedict Arnold in 1776
Abraham Woodhull's sister Mary is sometimes credited with exposing Major André and thus Benedict Arnold. André (alias John Anderson) fled when he realized he was under suspicion. Unlike the Culper Ring's, André's security was lax. That cost André his life, Arnold his reputation, and ultimately helped cost the British Empire its American colony.
Stopped by three soldiers, André first tried to bribe them to let him go. Instead of taking the bribe, the soldiers, now actively suspicious rather than idly curious, searched him and found incriminating papers. The letters proved conclusively that André was a British spy. The information contained in André's letters was almost useless to the British; their commander, General Clinton, already had it. They were, however, extremely valuable to Tallmadge.
André's captured messages were in Benedict Arnold's handwriting, making it suddenly clear who was leaking high-level information. Arnold fled for his life, going to England, then Canada. After alienating a number of business partners in New Brunswick, Arnold returned to England. André was not so lucky to escape the American forces–he would make a useful reprisal for the hanging of Tallmadge's dear friend, Nathan Hale. Caught dead to rights by the Culper Ring, André would soon be dead, period.
Hale had been hanged on Sept. 22, 1776 at the tender age of 21. He died bravely, with composure, courage and dignity. André faced the gallows equally bravely on Oct. 2, 1780. Before his death he received a visitor: Colonel Tallmadge.
The two spent part of their time together talking. At one point André asked Tallmadge whether his capture and Hale's were similar. Tallmadge, remembering his dead friend and perhaps feeling guilty at encouraging him to take a more active revolutionary role, replied, "Yes, precisely similar, and similar shall be your fate…".
The British evacuated New York in mid-August, 1783. On Nov. 16 of the same year, Washington himself visited to mark the seventh anniversary of the American retreat from Manhattan. While there he met someone to whom he and his new nation owed a personal and national debt: Culper agent Hercules Mulligan.
This article originally appeared on Explore The Archive. Follow @explore_archive on Twitter. | <urn:uuid:1f7a3b87-769d-4f30-be98-3b4ccfa438fb> | {
"dump": "CC-MAIN-2023-40",
"url": "https://www.wearethemighty.com/mighty-history/spies-helped-win-revolutionary-war/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506399.24/warc/CC-MAIN-20230922102329-20230922132329-00814.warc.gz",
"language": "en",
"language_score": 0.9745103120803833,
"token_count": 1469,
"score": 3.265625,
"int_score": 3
} |
Photograph by Jeff Smith
Published February 15, 2011
Josephine Adzrolo sat on a stool in front of her mud-brick home, stirring banku, a fermented paste of corn and cassava served with soup or okra stew. She heated the traditional mixture using a typical cooking fuel—charcoal—an energy source linked to serious global health risk.
But with her family waiting for lunch, Adzrolo cooked outdoors using a stove specially designed with a ceramic liner to retain heat. Although the scrap-metal exterior gave it a rough-hewn look, the cookstove was rated 40 percent more energy efficient than the traditional stoves used in the area.
For Adzrolo, the most obvious advantage was a practical one. "It saves a lot of charcoal," she said. "I can cook plenty of banku and soup."
Toyola Energy, the five-year-old Ghana business that made the stove, is aiming for far-reaching benefits as well. By using heat-conserving equipment outdoors, instead of more traditional cookstoves indoors, Adzrolo and others can avoid the high levels of toxic cooking smoke that have ravaged people's health throughout the developing world.
Half the world's population—3 billion people—cook with wood, charcoal, dung, coal or agricultural residues on simple traditional stoves or open fires. Breathing the smoke from those stoves causes a stunning variety of acute and chronic illnesses—pneumonia, emphysema, cataracts, lung cancer, bronchitis, cardiovascular disease, and low birth weight—all contributing to an estimated 1.9 million premature deaths every year—more than double the global death toll of malaria, according to World Health Organization statistics. Indeed, the WHO estimates harmful cookstove smoke to be the fourth worst overall health risk in developing countries.
More efficient cookstoves are the accepted solution, but it has been difficult to introduce them widely. However, world health and environmental activists believe that thanks to efforts of businesses like Toyola and others, the world may be within reach of a "tipping point" that could lead to mass adoption of clean cookstoves worldwide. Part of the impetus is that this is a health solution that can also help to clean up the atmosphere, a fact that has mobilized finance from the carbon markets that have developed under the United Nations' initiatives to address climate change.
Even if the Kyoto climate change agreement expires in 2012, such carbon financing is expected to continue voluntarily because of public and political pressure to offset climate change impacts.
Last September, the problem of unsafe cookstoves gained broader attention at the opening of the United Nations' General Assembly, when a new public-private partnership, the Global Alliance for Clean Cookstoves, was launched with the help of a five-year, $50 million commitment from the United States government.
(Related: "The Solvable Problem of Energy Poverty")
The nonprofit United Nations Foundation, which launched the effort, also gained backing from the UN itself, from the governments of Denmark, Germany, Norway, and Peru; the global energy company Shell* and its Shell Foundation; investment bank Morgan Stanley; and the nonprofit SNV-Netherlands Development Organisation.
The alliance's goal is to help 100 million homes adopt clean and efficient cookstoves by 2020. Its effort has even been featured on the Martha Stewart show.
The alliance notes that women and children, who breathe the smoke indoors, bear most of the health risk from the unsafe cooking techniques, as well as the brunt of the long labor spent collecting fuel.
In addition to creating an immediate human health risk, inefficient stoves are estimated to contribute 2.5 to 10 percent of current climate change through the emissions of black carbon or soot, according to research supported by the U.N. Environment Programme. There is a bright side to this dark problem, though. Because the soot stays in the atmosphere for just a few days to a couple of weeks, efficient cookstoves are viewed as a relatively quick way to reduce greenhouse gas emissions.
There is a wide variety of efficient cookstove choices, according to the alliance. Higher-performing stoves that can achieve 95 percent reduction in emissions can sell for about $100. But there is evidence of health benefits even from lesser emissions reductions. (See related blog: "Seeking to Improve Human and Ecological Health Together") And in Ghana, Toyola has been able to make inroads with its 40-percent-more-efficient models, sold for prices as low as $7. Toyola sold roughly 140,000 cookstoves to households and "chop bars," or local restaurants, since 2006, including 51,000 stoves last year.
One reason for Toyola's success is that it has been able to parlay the issue of climate change into carbon revenues. In September 2009, it became just the second cookstove project in the world to be registered by the Swiss-based nonprofit, the Gold Standard Foundation, a high-quality carbon credit certification.
Leslie Cordes, the clean cookstove alliance's interim executive director, met Toyola co-founder Suraj Wahab at the alliance's launching in New York in September. She said she was impressed by Wahab and his interest in working with the alliance to make the cleanest stoves possible.
While the alliance hasn't tested the Toyola stove and therefore can't comment on it specifically, she described Suraj as "part of a new breed of entrepreneurs looking to scale up production." And although a stove that generates carbon credits doesn't necessarily mean that it is highly efficient in reducing all the particulates that might affect one's health, "Toyola's work is consistent with the alliance's efforts to continuously improve the cleanliness and efficiency of cookstoves around the world," Cordes says.
Toyola's main office and production plant is among a hovel of mud-brick huts about 10 miles outside Accra, Ghana's largest city.
On a recent day, Wahab was helping to unload scrap metal off a small pickup truck, and reloading the truck with 60 freshly painted black cookstoves ready to sell.
Inside one of the modest buildings, workers were assembling and painting the cookstoves. One could hear a constant hammering of scrap metal pieces being flattened. The smell of paint permeated the gritty air. Dozens of cookstoves were stacked in a corner, set to be painted.
Toyola, which has more than 200 workers, has a decentralized operation. It makes stoves in five locations in Ghana and one in nearby Togo; its "stores" are trucks that deliver cookstoves. Wahab calls his salespeople, who earn a 10 percent commission, "evangelists," because "when we started the business, people didn't believe in the cookstove and what it could do."
Even now, Toyola often sells stoves on credit, adding a couple dollars to the price to incorporate interest. Wahab said the company stimulates repayment by encouraging customers to put their charcoal cost savings into a collection tin dubbed the "Toyola box."
Toyola grew out of the entrepreneurial efforts of Wahab and Ernest Kyei, who had participated in a project funded by the U.S. Agency for International Development to train cookstove artisans.
The Nigerian-born Wahab, who has lived in Ghana 13 years, said he couldn't get local banks interested in investing in the business. He said they wondered why two educated people—Wahab is an accountant by background and Kyei an engineer—wanted to get involved in such a dirty business.
"I didn't see coal pots [stoves], but a market of four million [Ghanaians] no one was selling to," Wahab said.
U.S.-based E+Co, which calls itself a "nonprofit impact investor," focusing on clean energy projects in developing countries, saw the potential for Toyola to generate carbon credits to subsidize the more costly stoves and to finance growth. It invested a total of $270,000 in Toyola and helped it through the two-year, nearly $200,000 process to gain carbon finance certification.
Erik Wurster, E+Co's carbon finance manager, said the lengthy process included surveying 125 typical users, and conducting independent "kitchen performance tests" to precisely measure how fuel use changed and declined once a household started using a Toyola stove. A climate change auditor licensed by the United Nations was required to do a sample to verify E+Co's findings before Toyola received Gold Standard registration.
Independent annual audits are done to verify stove sales and the carbon credits. Each cookstove has an identification number to trace when it was produced and sold. "It's a very rigorous test, but it has to be because a lot of money is at stake and the buyer has to know what it's getting," Wurster said.
Toyola thus far has received revenues from the sale of 51,230 tons of carbon credits for cookstove use between August 31, 2007 and September 8, 2009. That's roughly equivalent to the carbon dioxide emissions of 10,000 Toyota Camrys, driven 12,000 miles each.
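That equivalence holds up as a back-of-envelope calculation. The sketch below assumes "tons" means metric tons and treats the Camry figure as a round number; neither assumption comes from the article:

```python
# Back-of-envelope check of the Camry comparison above.
credits_kg = 51_230 * 1_000   # carbon credits in kg of CO2 (metric tons assumed)
cars = 10_000
miles_each = 12_000

kg_per_mile = credits_kg / (cars * miles_each)
print(f"Implied emissions: {kg_per_mile:.2f} kg CO2 per mile")
# ~0.43 kg/mile, close to a typical gasoline sedan's rate, so the
# comparison in the article is plausible.
```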
Goldman Sachs bought the credits, indicating how the carbon trade has become a mainstream investment.
Wurster said the Goldman Sachs purchase was confidential, but did say each cookstove generates about $20 worth of credits over its estimated five-year life span. He said the beauty of carbon finance is that it "snowballs"—stoves sold in previous years continue to accumulate credits as long as they are still in operation.
A carbon credit of up to $20 per stove more than offsets the higher production costs of building stoves with ceramic liners, and provides funds for expansion. But there is a lag in getting the carbon financing, and Toyola also faces payments on its debt to E+Co at 10-11 percent annual interest rates.
Wurster said he believes the carbon finance will continue voluntarily. "This is driven mostly by consumer sentiment, which we don't see drying up just because the Kyoto Protocol might expire," he said. "We have an agreement with Goldman through 2012, and are now discussing with buyers to commit through 2016."
In the meantime, the ambitious Wahab is itchy to grow faster and expand into neighboring countries.
He said he can only do so much with manual labor, and would like additional money to invest in mass production. "If I had enough money, I could do 100,000 stoves a year," he said.
National Geographic's Great Energy Challenge initiative is sponsored by Shell. National Geographic retains editorial autonomy.
"dump": "CC-MAIN-2014-49",
"url": "http://news.nationalgeographic.com/news/energy/2011/02/110215-cookstoves-sustainable-development-ghana/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400380037.17/warc/CC-MAIN-20141119123300-00090-ip-10-235-23-156.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.964027464389801,
"token_count": 2384,
"score": 3.296875,
"int_score": 3
} |
The Basket Star is characterized by five tentacles, each branched several times, with which it clings to the branches of soft corals and which it opens at night to feed. The body can reach a diameter of 8 centimeters; with the tentacles fully open, the entire animal can reach a maximum width of 80 centimeters. It feeds passively with its open tentacles, in particular on planktonic microparticles.
The Belt of Venus or girdle of Venus (Cestum veneris Lesueur, 1813) is a species belonging to the phylum Ctenophora, the only species of the genus Cestum.
In this dive, made some time ago at a depth ranging from 40 to 50 meters, we filmed the usual ghost nets lost close to a cliff at a distance of about 5 miles from the coast. We spent almost the entire dive on the rocks, which despite the conditions seemed quite vital, ...
The Red Lionfish (Pterois volitans) is a venomous coral reef fish in the family Scorpaenidae, order Scorpaeniformes. Pterois volitans is natively found in the Indo-Pacific region, but has become an invasive problem in the Caribbean Sea, as well as along the East Coast of the United States.
The Spotted sea hare, Aplysia dactylomela, is a species of large sea slug, a marine opisthobranch gastropod in the family Aplysiidae, the sea hares. As traditionally defined, this species of sea hare was cosmopolitan, being found in almost all tropical and warm temperate seas, including the Mediterranean Sea, where it was first seen in 2002 and is likely self-established due to increasing temperatures.
The marine environment is affected by all kinds of pollution that has always been caused by human activities, but from the industrial era to the present day it has grown exponentially, to the point that we may consider it past the point of no return. One human activity in direct contact with the sea is professional fishing. Fishing is generally practiced with trawl nets or set nets. The set net is certainly the most compatible, because it fishes selectively and is more respectful of the environment.
The Elephant Ear Sponge (Spongia agaricina, Spongia lamella) resembles a relatively flat bowl. In the lower part it narrows to form the point of attachment to the rock. Its size ranges from 14 to 80 cm in diameter, with a thickness of roughly 1 to 4 cm. It is a species endemic to the Mediterranean Sea that generally lives at depths between 15 and 50 meters, but can be found down to 150 meters deep.
"dump": "CC-MAIN-2023-40",
"url": "https://www.intotheblue.link/en_GB/2022/03/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510501.83/warc/CC-MAIN-20230929090526-20230929120526-00380.warc.gz",
"language": "en",
"language_score": 0.9421310424804688,
"token_count": 574,
"score": 2.640625,
"int_score": 3
} |
Science can be all periodic tables and textbooks - or it can be fun. MHLT Elementary students showed just how much fun it can be tonight at their Science Fair. You don't expect to hear laughter in a science lab, but Professor Gizmo got students and parents laughing and engaged with his wacky experiments. Principal Rob Way says that kind of learning is important.

"Science brings out the natural curiosity in kids. It's so important for kids to have that joyful, rich learning environment," Way said. "Kids are able to learn about all their subject areas - math and reading and social studies - through science. It's a great avenue to bring together learning." Students agree. They showed off projects to friends and family. Fifth grader Zoe Botes is working on an experiment with chicken and ostrich eggs. "I like doing the big projects like this and it's fun having to show people and your parents the projects that you've worked hard on," she said. "Especially showing them the ostrich eggs and the chicken eggs and that they're going to hatch and everything. So it's very fun." MHLT has hosted an annual Science Fair since 1997.
Story By: Lex Gray | <urn:uuid:2871ff27-6951-432a-b395-74fd22b365d2> | {
"dump": "CC-MAIN-2016-07",
"url": "http://www.wjfw.com/email_story.html?SKU=20130404205620&textsize=large",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701962902.70/warc/CC-MAIN-20160205195242-00235-ip-10-236-182-209.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9774315357208252,
"token_count": 246,
"score": 3.453125,
"int_score": 3
} |
I am going to share with you all an important type of gadget that any musician or electronics enthusiast would like to have. It is an active sound mixer, and remember, I am talking about active sound mixers, not passive ones. I searched Instructables and found out that there is not much on this topic. I could only find projects on passive mixers, and I couldn't find any active mixer projects discussed here on Instructables. Therefore I thought of taking up this project and documenting it step by step from the beginning.
So let's start by explaining what active and passive mixers actually are. Passive mixers are mixers that combine the signals with the help of only some resistors. Their main component is the mix resistor, and while it works, the sound mix lacks quality. A passive mixer does not contain any active elements to mix the audio signal from the inputs. It just acts as a divider circuit, so there is not much of a quality mix.
Therefore we would want an active mixer. An active mixer has various electronic components in the circuit that help to distribute the audio signal in a balanced manner. The output from the mixer is matched so it can be fed to any power amplifier. Therefore you would rather have an active mixer than a passive mixer for quality audio.
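To see why the passive divider circuit loses level, here is a minimal sketch of the math. It assumes equal 100K mix resistors on every input and idealized low-impedance sources, which is a simplification of any real circuit:

```python
# Why a purely resistive mixer attenuates: each source drives the common
# output node through its own mix resistor, and the other channels load
# it down. Equal 100K branches and ideal sources are assumed here.
R_MIX = 100e3  # ohms, one mix resistor per input (illustrative value)

def passive_gain(n_inputs):
    """Gain from one source to the summing node with n equal branches."""
    if n_inputs == 1:
        return 1.0
    r_others = R_MIX / (n_inputs - 1)       # other branches in parallel
    return r_others / (R_MIX + r_others)    # simple voltage divider

for n in (1, 2, 3):
    print(f"{n} input(s): gain = {passive_gain(n):.2f}")
# With 3 inputs each source drops to about a third of its level, which
# is the loss an active transistor stage is there to make up.
```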
So now I will explain how to build your own three-channel active mixer with the very few electronic components that I will be showing you here in my project.
Step 1: Parts You Will Be Needing
- 100K Variable Resistor (4 Nos)
- 1K5 (1 Nos)
- 100K (1 Nos)
- 5K6 (1 Nos)
- 10uF/10V
- 0.1uF Ceramic Capacitor (3 Nos)
- 10uF/16V Electrolytic Capacitor (1 Nos)
- 3.5mm Mono Sockets
- SPST Switch (3 Nos)
- PVC box
Step 2: Circuit Overview
The main component used here in the circuit is the transistor. It intelligently blends the incoming sound signals into a single output. The circuit has three variable resistors for controlling the input sound signals. Each signal is then fed to a ceramic capacitor that blocks any DC voltage and only allows the audio signal to pass through the line. The ceramic capacitor is very useful because there might be DC voltage present either in the signal or in the mixer itself, so it stops that voltage, thereby protecting the source, which might be an expensive smartphone, an expensive laptop, or anything else of great value.
The signal is then fed to the mix resistors, which take it to the transistor that mixes the sound with high gain and quality. Finally, at the end there is a filtering capacitor that filters out any unwanted signal arising from the transistor's mix and the DC supply, so the output is taken from the filtering capacitor, and this signal can be fed directly to any power amplifier circuit.
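As a rough check on the coupling network described above, the sketch below computes the high-pass cutoff formed by one 0.1uF ceramic capacitor from the parts list. Treating 100K as the impedance the capacitor sees is an assumption on my part; the exact value depends on the full schematic:

```python
import math

# High-pass cutoff of one input coupling network (capacitor value from the
# parts list; the 100K load seen by the capacitor is assumed, not measured).
C = 0.1e-6   # farads, ceramic coupling capacitor
R = 100e3    # ohms, assumed impedance at the mix node

f_cutoff = 1 / (2 * math.pi * R * C)
print(f"High-pass cutoff = {f_cutoff:.1f} Hz")
# ~16 Hz: DC is blocked, while the full audible band passes to the mix.
```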
Step 3: Compiling the Parts in the PCB
Let's start by using a general-purpose PCB. Start by first soldering the ceramic capacitors; while doing so you will also get a feel for how the signal passes through the mixer. Try to build a mental picture while soldering, keeping the circuit diagram in view. After the ceramic capacitors are soldered, solder the main part, the transistor. Be careful not to overheat the transistor while soldering. After the transistor is soldered, put in the remaining two biasing resistors. Finally, solder the filter capacitor, and your mixer board is more or less ready for further transformation.
Now that you have finished the soldering part, comes the work of the wires. It may get really messy with the wires, but try to be neat and bunch the wires up evenly. I would recommend using audio wire, as there will be less noise if you do. Audio wires are wires that have a shield covering the outer part to keep the audio signal free from interference. It is good practice, and I generally use this type of wire. Remember to ground the shields of the wires.
Finally, take out the two leads for positive and negative. Here the positive is the blue wire and the white wire is the negative. The red wires coming out of the board are shielded wires, because I want it to look more professional and to eliminate extra noise as much as possible.
Step 4: Putting It Into a Neat and Clean Enclosure
I started by choosing a PVC box, because it is very handy and holes can be drilled into it easily. I began by drilling three small holes at a constant interval for the 3.5mm sockets, then added the sockets and screwed them firmly to the base (picture 1).
Next I drilled holes for the 100K variable resistors at matching intervals for a professional look, directly above the 3.5mm sockets so that anyone can tell which channel each one controls (picture 2).
More drilling followed for the switches that select the inputs as needed, just above the holes for the variable resistors (picture 4), and in the middle of the switches a single hole was drilled for the main volume control.
Picture 5 shows the finished top panel of the still-unfinished unit, shot from the bottom angle.
Now it was a simple process of fitting the variable resistors and switches into their respective places and bolting them up; please follow pictures 6, 7, 8 and 9. It is very simple: just tighten the nuts of the variable resistors firmly against the front end of the PVC box with needle-nose or monkey pliers.
In picture 10 you can see all the variable resistors in their places; now we only need to add the switches.
In picture 11 you can see the back of the PVC box, where all the variable resistors are fixed to the box and ready to be soldered to their respective connections.
And finally, in picture 12 you can see the switches placed in their respective holes and the top of the three-channel audio board. At the top of the board is the master volume control, which is responsible for the overall output to the power amplifier. Just below it are the three channel-selector switches, responsible for selecting the audio channels, and below the selector switches are the individual channel volume controls, which vary the input audio for the mixer. It is partly finished and looking great for further alterations.
Step 5: Wiring the Components
After everything is screwed to the PVC box, we will complete the build by wiring the variable resistors and the 3.5mm sockets to the main board that we prepared earlier.
We will start by soldering the 3.5mm sockets and then gradually finish by wiring the variable resistors. In doing so, you will get a rough idea of how the audio feed comes in and is distributed through the circuit for mixing.
The soldering is pretty simple: just follow the schematic and you will find your way.
In the end, the finished product will look as shown in the figure, and all it needs is a power supply. Simply feed it 5V DC from any power source and you are ready to mix. In my mixer the supply comes in on the blue and white wires shown in the figure. You can add a maximum of three inputs; connect them up and test your brand-new active mixer.
The quality you get with this mixer is pretty good and it works really well. I generally use it when monitoring various frequencies over the air while listening to several audio programs. This gadget is a must-have for every electronics enthusiast.
"dump": "CC-MAIN-2019-35",
"url": "https://www.instructables.com/id/Active-Sound-Mixer/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316150.53/warc/CC-MAIN-20190821174152-20190821200152-00235.warc.gz",
"language": "en",
"language_score": 0.9457772970199585,
"token_count": 1696,
"score": 3.171875,
"int_score": 3
} |
October 10, 1942
The Senate Elects a Chaplain
When the Senate of 1789 convened in New York City, members chose as their first chaplain the Episcopal bishop of New York. When the body moved to Philadelphia in 1790, it awarded spiritual duties to the Episcopal bishop of Pennsylvania. And when it reached Washington in 1800, divine guidance was entrusted to the Episcopal bishop of Maryland.
During its first 20 years, the Senate demonstrated a decided preference for Episcopalians. Among the initial 12 chaplains were one Presbyterian, one Baptist, and 10 Episcopalians.
Through the nineteenth century, Senate chaplains rarely held office for more than several years, as prominent clergymen actively contended for even a brief appointment to this prestigious office. With the twentieth century, however, came year-round sessions and the need for greater continuity. The office became less vulnerable to changes in party control. Appointed by a Republican Senate in 1927, Reverend Z. T. Phillips—the Senate's 19th Episcopalian—continued after Democrats gained control in 1933, serving a record 14 years until his death in May 1942.
On October 10, 1942, the Senate elected its 56th chaplain, the Reverend Frederick Brown Harris. The highly regarded pastor of Washington's Foundry Methodist Church, Harris failed to survive the 1947 change in party control that led to the election of the Reverend Peter Marshall. When Marshall died two years later, the Senate invited Reverend Harris to resume his Senate ministry. With his retirement in 1969, Harris set the as-yet-unchallenged service record of 24 years.
More than any of his predecessors, Harris shaped the modern Senate chaplaincy. Members appreciated the poetic quality of his prayers. On learning of President John F. Kennedy's assassination, Harris went immediately to the Senate Chamber. He later recalled, "The place was in an uproar. Senate leaders Mike Mansfield and Everett Dirksen asked me to offer a prayer. I called upon the senators to rise for a minute of silence, partly because of the gravity of the tragedy, but partly to give me a minute more time to think of something to say."
Borrowing from the poet Edwin Markham, he said, "This sudden, almost unbelievable, news has stunned our minds and hearts as we gaze at a vacant place against the sky, as the President of the Republic, like a giant cedar green with boughs, goes down with a great shout upon the hills, and leaves a lonesome place against the sky."
"dump": "CC-MAIN-2015-32",
"url": "http://www.senate.gov/artandhistory/history/minute/The_Senate_Elects_A_Chaplain.htm",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988311.72/warc/CC-MAIN-20150728002308-00123-ip-10-236-191-2.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.961484968662262,
"token_count": 585,
"score": 2.6875,
"int_score": 3
} |
The movie Mean Girls, which was released in 2004, tells the story of Cady, a teenage girl who was homeschooled until attending a public high school. The movie begins by presenting two common stereotypes of homeschoolers: a girl with glasses, braces, and long braids winning a spelling bee, and five tow-headed boys wearing overalls and sitting on hay bales, saying in unison, “and on the third day, God created the Remington bolt-action rifle, so that man could fight the dinosaurs and the homosexuals.” This portrayal reflects common stereotypes about homeschoolers, but it is perhaps just as important to note that these images are only presented so that Cady can reject them, declaring herself not like “those” homeschoolers.
Ideologues and Pedagogues
In her 1991 article “Ideologues and Pedagogues: Parents Who Teach Their Children at Home,” Jane Van Galen, a sociologist, argued that homeschooling parents were divided into two camps, which she called “ideologues” and “pedagogues.” According to Van Galen, the ideologues, which comprise the larger group, were Christian fundamentalists who objected to what they believed the public schools were teaching and wanted to instill their conservative political and religious beliefs in their children. Pedagogues, in contrast, homeschooled because they believed that children learned more naturally apart from formal schooling, which they believed stifled children’s innate curiosity and creativity.
Van Galen argued that ideologues’ and pedagogues’ different motivations and viewpoints affected nearly everything about how they homeschooled: ideologues saw government regulation of homeschooling as the encroachment of “secular humanism” while pedagogues are less troubled by such intervention; ideologues often use structured curricula and strict discipline with their children while pedagogues are more likely to try creative and innovative techniques, releasing their children from desks and workbooks. Van Galen developed her conceptions of the two groups over the course of a year and a half spent meeting and speaking with homeschooling families, and her interpretation of homeschooling as a movement made up of two distinct groups is echoed in later scholarship.
Believers and Inclusives
Mitchell Stevens, a sociologist, spent almost ten years studying homeschoolers in Illinois before publishing his 2001 book on homeschooling, Kingdom of Children: Culture and Controversy in the Homeschooling Movement (Stevens, 2001). In his book, he looks in depth at the lives of homeschool families in Illinois, analyzing what he came to see as two distinct groups of homeschoolers and tracing the growth of national organizations as well as clashes between the two camps. Stevens argues that homeschooling is a social movement made up of a wide spectrum of individuals, but that most homeschoolers nevertheless fall into one of two groups, which he terms the believers and the inclusives. In his book, he sets out to determine who homeschoolers are and how this split occurred.
Stevens examines survey data on homeschoolers and then turns to the history of the movement, beginning with John Holt, an educational reformer who rebelled against formal schooling, and Raymond Moore, who taught that children were developmentally better off being educated at home for their first few years. Stevens carefully compares these two men’s views of the child: Holt believed in liberating the essential child and Moore believed in protecting the fragile child. These distinctions help to illuminate the difference between Stevens’ believers, who want to protect and nurture their children in what they believe is truth, and his inclusives, who want to set their children free to explore and create.
Stevens also looks at homeschool curriculum publishers, conventions, speakers, and organizations, both local and national. He argues that the believers and the inclusives each formed their own organizations separate from each other, and that these organizations reflected the core difference between the two groups. The believers' organizations were well-organized and hierarchical while the inclusives' organizations were loosely-knit and democratic. Stevens examines the controversy and tension between the two groups' organizations and argues that the believers came to dominate the homeschool world because of their better organization and mobilization. He says that throughout the 1980s and 1990s the number of Christians homeschooling increased dramatically, and that some inclusives resented what they saw as a takeover of their movement.
“Closed Communion” and “Open Communion”
In 2008, Milton Gaither, a historian of education, published the first historical treatment of the homeschool movement (Gaither, 2008). He begins with the colonies and traces the tradition of home education throughout the entirety of American history. Gaither distinguishes between “home schooling” and “homeschooling,” arguing that home schooling is merely an educational option, as it was in early American history and is becoming again today, while homeschooling is a deliberate alternative to and rejection of institutional schooling. Gaither traces the history of education in the home through four stages: government-encouraged home education in the colonies, the gradual eclipsing of the home by the public school, the antagonism between home and school that arose with the modern homeschool movement, and the hybridization of the home and school that he believes is taking place today.
Gaither goes into great depth regarding why the modern homeschooling movement emerged in the 1970s, and comes up with four reasons: countercultural sensibility becoming American sensibility, suburbanization that created a place for homeschooling to take place, the idealization of the child among both the left and right, and changes in public schools and families. Gaither examines the roots of the homeschool movement in the leftist hippie counterculture and in the new right fleeing the perceived teaching of secular humanism in public schools, arguing that both of these groups were intentionally rejecting institutional schooling, though for different reasons.
Gaither sees homeschooling as a grassroots movement and traces the growing fault lines between the two types of homeschoolers as support groups sprang up. While Van Galen called the two groups “ideologues” and “pedagogues” and Stevens called them “believers” and “inclusives,” Gaither calls the two groups “closed communion” and “open communion.” He chooses this terminology because conservative Christian homeschoolers who were intentionally leaving the “ungodly” public schools didn’t want to simply exchange one evil for another by joining support groups together with “ungodly” homeschoolers, and thus formed support groups that were “closed communion,” demanding adherence to statements of beliefs. According to Gaither, by 1990 the vast majority of homeschoolers were conservative Christians.
Gaither examines the various leaders of the homeschool movement and presents a fascinating look at the adversity between national and state open communion and closed communion homeschool groups, as well as the infighting that took place from time to time among various leaders in the closed communion community. Turning to the impact of John Holt and Raymond Moore on the homeschool movement in the 1970s and 1980s, Gaither adds a third influential figure: Rousas Rushdoony. He argues that Rushdoony, a Christian theologian and advocate of the homeschool movement, shaped Christian homeschoolers through his providentialist view of history, his reconstructionist politics, and his idea that the nation is mired in a conflict between a Biblical worldview and secular humanism. In addition, Gaither looks at the background of each of the various homeschool leaders who arose in the mid 1980s and 1990s, including Michael Farris, Brian Ray, Sue Welch, and Greg Harris, and at their impact on the homeschool movement.
Gaither finishes his book by asserting that, even as a still increasing number of Christians join the homeschool movement (in 2002 James Dobson called for all Christians to immediately remove their children from public schools), the movement itself was becoming accepted and mainstream. Gaither also looks at the growth of charter schools, cybercharters, and growing cooperation between homeschoolers and the schools. Homeschooling, he argues, is set to return to being “home schooling,” merely an accepted educational option. Gaither’s look at the homeschool movement is fascinating and informative, and will remain the definitive historical work on the movement for years to come.
Complicating the Picture
The most recent addition to scholarly literature on homeschooling is Jennifer Lois' 2012 Home Is Where the School Is (Lois, 2012). In contrast to earlier scholars, Lois focuses specifically on homeschooling mothers. Perhaps the most notable thing about her work is that she categorizes these mothers slightly differently from previous scholars. Rather than dividing them into ideologues and pedagogues or believers and inclusives, she divides them into “first choice” and “second choice” homeschoolers. First choice homeschoolers, she says, are mothers who feel that they are called to homeschool, whether for conservative religious reasons or progressive pedagogical reasons. In fact, Lois' work seems to suggest that both types of mothers similarly root their choice to homeschool in their common identities as mothers. Second choice homeschoolers, in contrast, are those who come to homeschooling after other educational methods fail their children. For these mothers, homeschooling is not an identity but rather a temporary educational option. Lois finds that first choice homeschooling mothers report higher levels of satisfaction and that second choice homeschooling mothers are likely to look forward to the day when their children are grown or back in school.
In many ways, “second choice homeschoolers” is simply another label for a group described in Rachel Coleman's 2010 master's thesis, a history of a local homeschool community—the “pragmatics.” And indeed, Lois and Coleman both give credence to Gaither's suggestion that as homeschooling becomes more and more accepted it will become simply one more educational choice rather than what amounts to an act of protest. In other words, pragmatic homeschoolers come to homeschooling because it's what works best for them and their children at that point in time, rather than because they believe either that institutional schooling is fundamentally flawed or that they are called by God to train up Christian children unsullied by the influences of the world.
There's another complication here as well. As Eric Isenberg points out in a 2007 article, all of this dividing and categorizing is easier to do in studies that involve getting to know homeschooling families in an ethnographic way than it is when looking at homeschoolers quantitatively (Isenberg, 2007). Isenberg points out that there are numerous part-time homeschoolers, short term homeschoolers, and parents who homeschool one child but not another, information that seems to suggest that there is something to what Lois has called “second choice” homeschoolers and what Coleman called “pragmatics.” Further, Isenberg says that while the three main reasons people give for homeschooling are moral/religious, academic, and environmental (i.e. concern about the school environment), drawing conclusions from these numbers is difficult because there is overlap that makes differentiating between those homeschooling for religious reasons and those homeschooling for secular ones complicated and tricky to quantify.
Of course, Isenberg does not reject entirely the idea that there are fundamental groupings of homeschoolers. He points out that religious homeschoolers are more likely to homeschool all of their children and significantly more likely to homeschool long term, suggesting the enduring importance of the believers. Isenberg also notes that public and private school options become more attractive and homeschooling less attractive in areas with large concentrations of evangelical Protestants, once again pointing to the importance of the believers. Further, Isenberg suggests that the questions in the survey data that he examines were not well designed—even a nonbeliever could mark that they homeschool to give their child a moral or religious education, for example—meaning that differences that may be more apparent to researchers like Stevens or Gaither may be obscured in the survey data. This suggests that we both need better survey data and also need to not underestimate the importance of actual field work.
Article published December 2013. | <urn:uuid:3d34f424-a3eb-46ec-a3ff-fd13a918758e> | {
"dump": "CC-MAIN-2017-17",
"url": "https://www.responsiblehomeschooling.org/homeschooling-101/how-have-scholars-divided-homeschoolers-into-groups/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121869.65/warc/CC-MAIN-20170423031201-00268-ip-10-145-167-34.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9744370579719543,
"token_count": 2587,
"score": 3.421875,
"int_score": 3
} |
Grey Headed Flying Fox
The Grey-headed Flying-fox is a threatened species in Victoria.
Known for their fly-outs at sunset, you may find flying foxes visiting your backyard, orchards and nearby parks to feed.
The grey-headed flying fox is easily recognisable by its reddish-coloured collar and grey head.
Flying-foxes are intelligent mammals, with excellent night vision and an acute sense of smell that helps them find nectar and navigate their way along the Australian coastline. Their legs have small muscles, which make them light enough to fly, but this means they are not strong enough to stand upright. Social and at times very noisy, flying-foxes have over 30 distinct calls they use to defend their territory, find their young and attract mating partners.
Grey-headed flying fox colony numbers fluctuate with the season; they are usually higher in summer than in winter.
The Department of Environment, Land, Water and Planning offer the following tips for living with flying-foxes:
- If you choose to net your fruit trees to protect them from birds and flying-foxes, please use wildlife-safe netting. Flying-foxes and other animals are easily entangled in netting with holes larger than 5mm x 5mm and it is the leading cause of death and injuries for flying-foxes in urban areas. See the DELWP Fruit tree netting and wildlife fact sheets for more information.
- Flying-foxes can also get caught on barbed wire. If you have fences including barbed wire on your property, consider painting it a light colour or taping on plastic bags to make it more visible.
- Please do not approach flying-foxes or attempt to touch them yourself. A small percentage of flying-foxes carry Australian Bat Lyssavirus, which is similar to rabies. If you are concerned about the welfare of flying-foxes in your area, contact a local wildlife carer who is trained to handle bats.
- If you find a lifeless flying-fox, do not touch it. Please contact a local wildlife rescue organisation for assistance.
- If you are bitten or scratched by a flying-fox, thoroughly wash the wound, apply an antiseptic solution and see your doctor immediately.
- Even though no flying-fox to dog or cat transmission of disease has been recorded, dogs and cats should be kept away from flying-fox roost sites where possible.
For more information visit the DELWP website. | <urn:uuid:98cfa974-fd8c-45fd-aa09-3031b6d6f466> | {
"dump": "CC-MAIN-2022-40",
"url": "https://www.colacotway.vic.gov.au/Environment-Sustainability/Grey-Headed-Flying-Fox",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00705.warc.gz",
"language": "en",
"language_score": 0.9331237077713013,
"token_count": 513,
"score": 2.9375,
"int_score": 3
} |
Running down "do-gooders" has become a popular pastime in recent years. Lampooning, criticizing and even attacking philanthropists for their charitable activities has become sport for journalists and academics alike. Big donors have been subjected to specific vilification as their acts are characterized as a means to self-aggrandisement or tax evasion. Yet, it is widely acknowledged that philanthropy has played a critical role in both developed and developing societies from the establishment of Carnegie Libraries in Victorian England to the global health interventions of the Gates Foundation. Arguably, without philanthropists – big or small – society would be greatly impoverished and projects beyond the scope of government and the market would never receive funding.
In an impassioned defence of the role of philanthropy in society, Beth Breeze tackles the main critiques levelled at philanthropy and questions the rationale for undermining, disparaging and trivialising philanthropic acts. She contends that although it might be flawed, philanthropy is a sector that ought to be celebrated and championed so that an abundance of causes and interests can flourish.
Introduction
1. What is philanthropy?
2. Philanthropy under attack?
3. The academic critique
4. The insider critique
5. The populist critique
6. How and why do attacks on philanthropy stick?
Conclusion: in praise of philanthropy
Beth Breeze is Director of the Centre for Philanthropy and Reader in Social Policy at the University of Kent. | <urn:uuid:465290e5-250b-4f31-8a07-f40766352ccb> | {
"dump": "CC-MAIN-2021-25",
"url": "https://www.agendapub.com/books/119/in-defence-of-philanthropy",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488540235.72/warc/CC-MAIN-20210623195636-20210623225636-00384.warc.gz",
"language": "en",
"language_score": 0.9529094099998474,
"token_count": 292,
"score": 2.703125,
"int_score": 3
} |
Medicated Stents Reduce Heart Attacks by Delivering Medication Downstream
patients receiving drug-coated stents do better than patients receiving bare metal stents
15, 2011: Cardiac patients receiving medicated stents, a procedure that occurs often when blood vessels are blocked, have a lower likelihood of suffering heart attacks or developing new blockages in the vessel downstream from the stent, according to researchers at Cleveland Clinic.
Stents have been used to prevent re-narrowing of coronary arteries after balloon angioplasty, and newer designs have included coatings with medications to prevent re-narrowing from occurring within the stent after implantation.
The recent study led by Richard Krasuski, M.D., Director of Adult Congenital Heart Disease Services and a staff cardiologist in the Department of Cardiovascular Medicine at the Miller Family Heart & Vascular Institute at Cleveland Clinic, suggests that these medicated stents may deliver the medication to the vessel beyond the stent.
In a study recently published in the American Heart Journal, Dr. Krasuski and his colleagues demonstrate that patients receiving medicated stents have a lower likelihood of suffering heart attacks or developing new blockages in the vessel downstream from the stent.
"Though there have been concerns about clots forming inside drug-releasing stents, the totality of data suggests that
patients receiving drug-coated stents do better than patients receiving bare metal stents," Dr. Krasuski said.
"It has not been clear before, however, why preventing re-blockage in the location of a stent would have such a large
benefit, but our study suggests that there may be more that the stent is doing.
When blood flows through the stent, medication not only reaches the vessel it is touching but likely the distal vessel
as well. In this way it could be having a much more profound effect on the vessel."
If this concept is confirmed, it could revolutionize treatment of cardiovascular disease and problems with other organ systems as well. Stents could be altered to deliver many different medications in small amounts directly to the blood vessels. This could maximize the benefits of different drugs, reduce their toxic effects, and improve patient compliance.
What Is a Stent?
A stent is a small mesh tube that's used to treat narrowed or weakened arteries in the body. Arteries are blood vessels that carry blood away from your heart to other parts of your body.
You may have a stent placed in an artery as part of a procedure called angioplasty (AN-jee-oh-plas-tee). Angioplasty restores blood flow through narrowed or blocked arteries. Stents help prevent the arteries from becoming narrowed or blocked again in the months or years after angioplasty.
You also may have a stent placed in a weakened artery to improve blood flow and to help prevent the artery from bursting.
Stents usually are made of metal mesh, but sometimes they're made of fabric. Fabric stents, also called stent grafts, are used in larger arteries.
Some stents are coated with medicines that are slowly and continuously released into the artery. These stents are called drug-eluting stents. The medicines help prevent the artery from becoming blocked again.
"dump": "CC-MAIN-2016-30",
"url": "http://www.seniorjournal.com/NEWS/Health/2011/20110915-MedicatedStents.htm",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824757.62/warc/CC-MAIN-20160723071024-00296-ip-10-185-27-174.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9494501948356628,
"token_count": 854,
"score": 2.859375,
"int_score": 3
} |
Because renewable energy technologies like wind and solar don’t always produce energy when we want them to, it’s often argued that we’ll have to store wind and solar energy in giant batteries or other forms of grid energy storage before we can fully transition the electricity system toward renewable sources. Energy storage is touted as the “holy grail” that will unleash renewable energy and allow it to fully compete with its nonrenewable counterparts.
While there’s no doubt that energy storage can help integrate renewable energy with the grid, a recent study by Eric Hittinger of the Rochester Institute of Technology and Inês Azevedo of Carnegie Mellon University indicates that bulk energy storage would most likely increase total U.S. electricity system emissions if it were installed today, because it would typically store electricity generated from fossil fuels rather than renewable sources.
So what’s going on here? On the one hand energy storage is the “holy grail” for renewable energy. On the other hand, experts say storage could increase emissions. With this post I’ll explain how energy storage influences total emissions from the electricity system, and why researchers say energy storage could be bad for emissions in the short term.
Unlike a conventional power plant, most forms of energy storage don’t produce any on-site emissions. There’s no smokestack or combustion associated with conventional pumped-hydroelectric energy storage or emerging battery systems. Rather, most emissions from grid energy storage are caused by the upstream and downstream effects that storage has on the wider grid.
One of the key applications for energy storage is charging when electricity demand and the wholesale electricity market price are low and then discharging when electricity demand peaks and the wholesale price is higher. Because the energy storage plant buys electricity when it’s cheap and then sells electricity when it’s expensive, it accumulates revenue that can be used to pay off its upfront capital cost.
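The arbitrage arithmetic here is easy to make concrete. The Python sketch below uses invented prices and an assumed 80 percent round-trip efficiency (none of these numbers come from the study); it shows that the price spread must cover the efficiency losses before the storage plant earns anything:

```python
# All figures are illustrative assumptions, not values from the study.
p_offpeak = 25.0    # $/MWh paid while charging
p_onpeak = 60.0     # $/MWh earned while discharging
efficiency = 0.80   # assumed round-trip efficiency

energy_in = 1.0                       # MWh bought off-peak
energy_out = energy_in * efficiency   # MWh actually delivered on-peak

revenue = energy_out * p_onpeak - energy_in * p_offpeak
print(f"Net revenue per MWh charged: ${revenue:.2f}")  # $23.00 here

# Break-even condition: p_onpeak must exceed p_offpeak / efficiency.
print(f"Break-even on-peak price: ${p_offpeak / efficiency:.2f}/MWh")
```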
The effect that an energy storage plant operating in this way has on total emissions depends on what type of power plants turn on to fulfill the new demand caused by the storage plant charging, and what type of generators are offset when the storage plant discharges. If energy storage charges with wind energy and offsets coal, it’s definitely good for emissions. If it charges with coal and offsets natural gas, then it’s definitely bad for emissions. But which one of these situations is most likely?
The goal of Hittinger and Azevedo’s study was to predict which generators would be used to charge energy storage and which generators would be offset as energy storage discharges. To predict when energy storage would charge and discharge, they modeled how an energy storage plant would economically respond to price fluctuations in various U.S. electricity markets. Then, they used data showing which generators are likely to respond to a change in electricity demand at various times of day to predict which generators would come online to fulfill additional demand as energy storage charges and which generators would be offset when energy storage discharges.
The results of the study indicate that an energy storage plant that responds to electricity market price signals in an optimal way would typically charge with coal electricity, and then offset peaking natural gas generation, because the market price of natural gas electricity is typically higher than the price of coal electricity. Because burning coal produces about twice the greenhouse gas (GHG) emissions of burning natural gas, total GHG emissions would likely increase—even if energy storage operates with 100 percent round-trip efficiency. When you add in energy losses associated with the energy storage conversion processes, the situation gets even worse.
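A back-of-envelope version of that finding, using rough emission intensities that are commonly cited approximations rather than numbers from the paper, looks like this:

```python
# Illustrative assumptions (tCO2 per MWh generated):
coal_intensity = 1.0   # approximate, for coal plants
gas_intensity = 0.5    # approximate, for peaking natural gas
efficiency = 0.80      # assumed round-trip storage efficiency

energy_discharged = 1.0                          # MWh sent back to the grid
energy_charged = energy_discharged / efficiency  # MWh of coal power stored

added = energy_charged * coal_intensity       # extra coal emissions
avoided = energy_discharged * gas_intensity   # displaced gas emissions

print(f"Net change: {added - avoided:+.2f} tCO2 per MWh discharged")  # +0.75
# Even with efficiency = 1.0 the result stays positive (+0.50),
# because coal's intensity is roughly double that of gas.
```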
Despite the typical notion that energy storage is a “green” technology, Hittinger and Azevedo’s study indicates energy storage likely wouldn’t charge with electricity generated from renewable sources. There’s no incremental cost associated with producing electricity from wind or sunshine, so unlike a conventional power plant a wind or solar farm is almost never dialed back due to low electricity demand. Thus, wind and solar couldn’t suddenly increase their output to charge energy storage. Rather, a coal plant would most likely dial up its output to meet the new demand for electricity from energy storage.
Simply put, energy storage that participates in today’s electricity market in an economic way would most likely store electricity from coal plants, and then undercut peaking natural gas plants, causing total emissions to increase.
While energy storage operating on today’s grid would likely increase total emissions, it’s important to remember that the emissions associated with energy storage are almost wholly associated with the local mix of electricity sources—and this mix is subject to change. As the amount of renewable energy installed on the electric grid increases, there will be a growing number of occasions where renewable energy production must be forcibly curtailed to prevent overloading a transmission line, shutting off a “reliability must-run” power plant, or destabilizing the power grid. During these occasions, energy storage could charge with renewable energy that otherwise would not have been delivered to the grid, and then discharge later in the day to offset a coal or natural gas power plant, causing total greenhouse gas emissions to decrease. However, this sort of scenario is rare on today’s grid with its small share of renewable energy.
As state and federal policymakers consider energy storage incentives, mandates, or demonstration programs, they should critically consider the particular impact that energy storage might have on emissions. Unlike renewable energy, energy storage in the form of batteries or other technologies is not a definitive good thing for the climate, so it shouldn’t be treated like it is. Energy storage that enables renewable energy in regions where it is constrained (e.g. Hawaii) can help to reduce carbon emissions—but energy storage that simply stores off-peak fossil fuel electricity will almost certainly increase carbon emissions.
Reference: Hittinger and Azevedo, 2015. | <urn:uuid:1da28b33-0135-4bd8-8b39-6d5ff5de7144> | {
"dump": "CC-MAIN-2021-43",
"url": "https://blogs.scientificamerican.com/plugged-in/study-indicates-bulk-energy-storage-would-increase-total-u-s-electricity-system-emissions/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584567.81/warc/CC-MAIN-20211016105157-20211016135157-00297.warc.gz",
"language": "en",
"language_score": 0.930858314037323,
"token_count": 1218,
"score": 3.453125,
"int_score": 3
} |
Who Actually Invented The Wheel?
Historically speaking, wheels are a much newer development than you might expect. The oldest recovered specimen is a wooden Slovenian model built sometime between 5,100 and 5,350 years ago. By then, humans had already been practicing agriculture for several millennia—in fact, farming may date all the way back to 12,000 BCE. Canoes and animal domestication also vastly predate the wheel.
Why did this invention take so long to get rolling? Well, from a vehicular standpoint, spinning wheels are basically useless unless they’re attached to a secure shaft of some sort. It was only after mankind finally built such stabilizers—which we now call “axles”—that the wheel began realizing its full potential. “The wheel-and-axle concept was the real stroke of brilliance,” says anthropologist David Anthony. That idea required extreme finesse, which only metal tools could adequately provide. However, these didn’t become widespread until around 4000 BCE, hence our delay.
Slovenia’s aforementioned artifact emerged from the Ljubljana Marshes back in 2002. With a 27.5-inch radius, it was presumably one of two wheels affixed to an ancient pushcart. Yet, impressive as the relic is, a Polish pot—made anywhere from 5650 to 5385 years ago—upstages it. Sketched upon this container is a crude wagon, thought by many to be the first artistic depiction of wheeled transportation.
Back in those days, Northern Europe was populated by what archaeologists call “The Funnel Beaker Culture.” Sophisticated agriculturalists, these people just might have been the first to construct true wheels. Other candidates include the Mesopotamians and the largely-sedentary Cucuteni-Tripolye culture. That latter group built small, four-wheeled toys in modern-day Ukraine, Moldova, and Romania.
Ultimately, it’s possible that many groups independently invented the wheel. Ancient Mesoamericans, for example, also produced little wheeled figurines despite having no known contact with their old world counterparts. However, the western hemisphere suffered from a near-total lack of domestication-ready animals capable of pulling carts. Thus, full-sized wheels don’t appear to have become popular on either American continent before overseas invaders started showing up. | <urn:uuid:c2708fd5-e3b4-4891-880d-b5561644aec2> | {
"dump": "CC-MAIN-2018-09",
"url": "http://mentalfloss.com/article/62357/who-actually-invented-wheel",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812938.85/warc/CC-MAIN-20180220110011-20180220130011-00270.warc.gz",
"language": "en",
"language_score": 0.9583653211593628,
"token_count": 503,
"score": 3.921875,
"int_score": 4
} |
TULSA - For months we've been looking into a large earthquake that shook the state this past November.
Scientists shared new information with us about its possible cause.
While thousands of earthquakes were being tracked in Lincoln County last year, thousands of gallons of wastewater were being injected into wells nearby.
Once water is used to help get oil and gas out of a rock, that water is later disposed of and goes into an injection well a mile or more deep. In Lincoln County alone there are almost 200 injection wells, that's where a 5.6 quake shook the state in November.
"There are three injection wells that are located within five kilometers (3.1 miles) of the main shock," said Dr. Steve Horton for the Center for Earthquake Research and Information.
2NEWS sent Dr. Horton injection well reports we obtained from the Oklahoma Corporation Commission.
Those reports show how much fluid is pumped into the wells on a regular basis. Horton looked at the 2010 data and the seismic activity in the area and found, "It's possible that the magnitude 5.6 was actually triggered by fluid injection," said Horton.
He recently presented his findings at the Seismological Society of America conference in San Diego, where scientists from across the county share their research, including Austin Holland, with the Oklahoma Geological Survey. He's also the state expert on earthquakes.
"I would certainly say that it's possible and we're looking into it, but at the moment the data is just very inconclusive and really doesn't suggest that. For instance, injection has continued since the earthquakes happened and the earthquakes have decreased dramatically, sort of as we'd expect in a natural process," said Holland.
Horton agrees that it's not conclusive but does believe more research needs to be done.
Meanwhile, Horton's colleague Dr. Bill Ellsworth, with the US Geological Survey, looked at man-made seismic activity across the central U.S. and is particularly interested in the increase in activity in Oklahoma, especially given the number of injection wells in the Sooner State.
"I think we're all being very careful to say this earthquake or that earthquake has been triggered, but it's a question that we need to go back now and look at where the earthquakes are and to try to understand what industrial activities might be in their vicinity to see if there's a link," said Ellsworth.
The Oklahoma Corporation Commission oversees injections in the state and has said in the past that it turns to Holland as its expert, but state representative Ron Peters wants all findings to be studied.
"The Corporation Commission could certainly call hearings and hear from all sides of the issue and make some decision based on that," said Representative Ron Peters, (R)-Tulsa.
While Peters says he's inclined to go with Oklahoma's expert, he told us he'll encourage the Commission to allow other experts to weigh in.
After all, Ellsworth says we're all after the same thing.
"This is a challenge for the regulators as to how they want to move forward. I think everyone is interested in developing systems that can produce this really valuable resource in a safe manner," said Ellsworth.
The Oklahoma Independent Petroleum Association has said it doesn't believe the injection wells are tied to the earthquakes and admits any moratorium on injection wells could hurt the oil industry.
Still Holland says it's something he's continuing to look into. | <urn:uuid:ce4186bb-96fb-4179-bec5-29a4ff61db8f> | {
"dump": "CC-MAIN-2014-41",
"url": "http://www.kjrh.com/news/local-news/scientist-finds-possible-cause-of-large-56-oklahoma-earthquake-in-november-2011",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037662910.19/warc/CC-MAIN-20140930004102-00076-ip-10-234-18-248.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9810196161270142,
"token_count": 693,
"score": 2.703125,
"int_score": 3
} |
Description: Postcard photograph of Patzcuaro Lake at sunset ("atardecer" in Spanish), Guadalajara, Mexico. The photograph is cast in relief from the setting sun disappearing behind a line of mountains that stretch across the photograph. Several clouds filter sunlight onto the lake in an irregular pattern. A dock is just visible to the right of the photograph, slightly below where the mountain line meets the lake waters. A hand-written inscription on the front of the postcard reads "Atardecer Lago de Patzcuaro No. 19." A letter to Federica Abreu is addressed on the back.
Contributing Partner: Witte Museum | <urn:uuid:f026963b-5462-4e05-8e22-f7a6f3a22ec1> | {
"dump": "CC-MAIN-2016-07",
"url": "http://texashistory.unt.edu/explore/collections/CARPA/browse/?fq=str_location_county%3ABexar+County%2C+TX",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701159376.39/warc/CC-MAIN-20160205193919-00073-ip-10-236-182-209.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9040530323982239,
"token_count": 137,
"score": 2.578125,
"int_score": 3
} |
Because most Jewish texts of the sixteenth through the eighteenth centuries, as throughout most of Jewish history, were written in Hebrew by men for other men, we have very little direct evidence of women’s religious lives. Tkhines (Yiddish, from Hebrew tehinnot, “supplications”), private devotions and paraliturgical prayers in Yiddish, primarily for women, were published beginning in the early modern period, especially in Central and Eastern Europe and among Yiddish-speaking populations elsewhere. Written by both women and men, they were printed and reprinted again and again, thus providing evidence of their great popularity. Moreover, we can document some of the seventeenth- and eighteenth-century women who composed them. Thus, these prayers are among the most important resources for the history of the religious lives of Jewish women in the Yiddish-speaking world.
Literary setting. As is well-known, the great classics of Jewish literature were almost always written in Hebrew and Aramaic, known collectively as “the holy tongue,” by a small group of learned men, for others in the same intellectual elite. Very few women learned more than the rudiments of Hebrew, and those Central and Eastern European Jewish women who could read were usually literate only in the vernacular Yiddish. However, during the sixteenth and seventeenth centuries, new rituals and new genres of religious literature, directed at new audiences, arose among Jews. This was a result partially of certain developments within Kabbalah during this period, as well as general European religious trends, and was facilitated by the rise of printing. For the first time, book production was cheap enough for broad masses of people to have access to published materials. Thus, new genres of literature emerged for a non-scholarly audience, which included women. Guides to the ethical life, books of pious practices, new liturgies and rituals, often in abridged and simplified form, were published both in Hebrew, for an audience of men with a basic education in classical Jewish texts, and in Yiddish, for women and for men “who are like women,” i.e. those without much knowledge of Hebrew. Many of these new publications (including Hebrew tehinnot, supplemental prayers for men) developed out of and popularized a mystical pietism originating among the kabbalists of Safed; others originated among secret followers of Sabbetai Zevi (1626–1676), the failed mystical messiah. Tkhines were an important form of women’s participation in this pietistic revival and its popular literature. Indeed, this religious movement was accessible to women precisely because it was spread largely by literary means. Interestingly, however, tkhines from the end of the eighteenth century show little evidence of influence from hasidism. Hasidic teachings, especially in the early years of the movement, were transmitted orally from master to disciple in small groups that excluded women.
Nature and History of the Genre. In printed tkhine collections, each individual prayer begins with a heading directing when and sometimes how it should be recited: “A pretty tkhine to say on the Sabbath with great devotion;” “A tkhine that the woman should pray for herself and her husband and children;” “A confession to say with devotion, not too quickly; it is good for the soul;” “When she comes out of the ritual bath;” “What one says on the Eve of Yom Kippur in the cemetery;” “When the shofar is blown on Rosh ha-Shanah, say this.” Scholars are divided as to whether these prayers were meant as a women’s substitute for the Hebrew liturgy, or as supplementary, voluntary prayers, recited when women wished. Although some tkhines were intended to be recited in the synagogue, and a few were specifically for male worshipers (“A lovely prayer for good livelihood to be said every day by a businessman”) the majority were associated with women’s spiritual lives in the home or other, unspecified locations: prayers to be recited privately for each day of the week, on Sabbaths, festivals, fasts and New Moons, for the three “women’s commandments,” for pregnancy and childbirth, for visiting the cemetery, for private griefs such as childlessness and widowhood, for recovery from illness, for sustenance and livelihood, for confession of sins. While domestic concerns run through these prayers, so, too, do grander themes from Jewish thought, especially the hope for the messianic redemption and the end of exile.
Although there are manuscript tkhines, none are known that precede the appearance of the genre in print. There are two main groups of tkhines: first, those that were printed in Western and Central Europe in the seventeenth and eighteenth centuries, which, although published anonymously, were probably written or compiled by men for women; and second, those that appeared in Eastern Europe in the seventeenth, eighteenth and early nineteenth centuries, often with named authors or compilers, some of whom were women. The geographical designation refers primarily to place of printing, rather than place of composition, which is more difficult to determine, and is intended to suggest, as well, a rough periodization, with certain overlaps. The language of the tkhines (known from the seventeenth century on as “tkhine-loshn”) is relatively fixed, rather like an increasingly archaic “prayer-book English,” and displays few of the distinctive linguistic features of the developing Eastern European varieties of Yiddish; thus, linguistic analysis is of little help in determining place of composition.
Western European tkhines were published in collections of between thirty-five and one hundred and twenty prayers, addressing many topics: either in small books or as appendices to Hebrew prayerbooks, often prayer books with Yiddish translation. The first major collection, entitled simply Tkhines, was published anonymously in Amsterdam in 1648. The introduction to the work sets out the author or editor’s motivations in publishing the book:
Now our sages have composed many praises … and prayers to the Almighty God … in the holy tongue, which women usually do not understand. … It is like a blind person standing at a window and looking out at the street to see wondrous things—this is the same as women saying the tehinnot in the holy tongue and not knowing what they are saying … Thus I could not excuse myself from acceding to [women’s] desire … so that she will be able to understand the prayers … For prayer comes from the heart, and when the heart does not know what the mouth speaks, the prayer helps but little.
Numerous reprints (usually entitled Seyder tkhines), expansions and additional collections followed. In the mid-eighteenth century, Seyder tkhines u-vakoshes (“Order of supplications and petitions,” Fürth: 1762, although there may be one or two earlier editions), a comprehensive collection incorporating several earlier works, emerged and was repeatedly reprinted, with alterations, over the next one hundred and fifty years, first in Western, and then in Eastern Europe. These various Western European texts convey the holiness to be found in the domestic and the mundane, in the activities of a wife and mother, but they also depict the angels, the patriarchs and matriarchs, the male and female heroes of Jewish history, and the ancient Temple that stood in Jerusalem.
The very earliest Eastern European tkhines were published in Prague. Eyn gor sheyne tkhine (“A very beautiful Tkhine,” ca. 1600) is one of the first to claim female authorship: it is attributed to “a group of pious women.” Two other Prague imprints, one from the turn of the eighteenth century and the other from 1705, are attributed to women: Rachel daughter of Mordecai Sofer of Pinczow and Beila daughter of Ber Horowitz. Like many other Eastern European texts, all three of these Prague tkhines were short and dealt with only a single subject each, such as a tkhine “to be recited with devotion every day.” However, one notable work, Seyder tkhines (Prague: 1718), was written by a man, Mattithias ben Meir Sobotki, formerly rabbi of Sobota, Slovakia, explicitly for a female audience. “My dear women,” he writes, “… I have made this tkhine for you in Yiddish, in order to honor God and … to honor all the pious women. For there are many women who would gladly awaken their hearts by saying many tkhines.” This work contains thirty-five prayers on a variety of topics, with many prayers for the Days of Awe and for pregnancy, childbirth and infertility. Surprisingly, this male author introduces a personal subjectivity in a female voice into many of these tkhines. They are written as the prayers of women struggling with misfortune (infertility, widowhood) or danger (a husband on a hazardous journey). One of the tkhines for infertility begins: “Lord of the whole world, I, poor woman, come before you to bemoan my suffering and the sorrow I carry in my heart.” Note that this is Mattithias Sobotki speaking, imaginatively taking on a woman’s persona. Interestingly, the author’s name disappears from later editions of this work, which becomes known simply as “The Prague Tkhine [Preger tkhine].”
Except for the Prague imprints, the Eastern European tkhines were typically printed as small pamphlets, usually under twenty pages long, on bad paper with crabbed type, often with no imprint, making their bibliographic history difficult to trace. Books of tkhines originating in eighteenth-century Eastern Europe, especially in Galicia, Volhynia and Podolia (now parts of Poland, Belarus and the Ukraine), tended to deal with a smaller number of subjects (such as the High Holidays and the penitential season) and were often attributed to a single author. Because a significant number of these authors were women, these texts allow us, for the first time, to hear women’s voices directly. Important examples include: Tkhine imohes fun rosh khoydesh elul (“Tkhine of the Matriarchs for the New Moon of Elul” [and the entire penitential season]; Lviv, n.d.), by Serl daughter of Jacob ben Wolf Kranz (the famed Preacher of Dubno, 1741–1804), which calls on the four biblical matriarchs (Sarah, Rebecca, Rachel and Leah) to come to the aid of the worshipper and plead her case before the heavenly court; two short tkhines by Leah Dreyzl (mid-eighteenth century), daughter of Moses of Zolkiew and Nehamah Naytshe, which, in the form of powerful sermons, call on women to repent their sins; Tkhine imohes (“Tkhine of the [biblical] Matriarchs”) for the Sabbath before the New Moon, by Leah Horowitz (eighteenth century), which argues for the power of women’s prayer and quotes from rabbinic and kabbalistic sources; and Shloyshe sheorim (“The Three Gates”), attributed to Sarah bas Tovim (eighteenth century), which contains three sections: one for the three “women’s commandments,” one for the High Holidays and one for the Sabbath before the New Moon. These Eastern European authors have distinctive literary styles and concerns. In contrast to Western European texts, the tkhines of Leah Horowitz and Sarah bas Tovim suggest that women should take part—in some fashion—in such traditionally male activities as synagogue prayer and Torah study. Both of these authors also write of hopes, prayers and rituals to bring about the coming of the Messiah. Serl and Leah Dreyzl, by contrast, focus on the inner life and repentance of the individual woman.
By the mid-nineteenth century, the genre had undergone significant change. Jews in Central and Western Europe had largely abandoned Yiddish; books comparable to tkhines were published first in Germanized Yiddish, then in German in Yiddish characters and finally in German. However, these texts breathed an entirely new sensibility, influenced by the rising ideal of the bourgeois family, with its stress on sentiment and emotional family ties and its new definition of gender roles. The authors of these collections of prayers for women in German, published throughout the nineteenth century and into the twentieth, were almost all men. However, the most popular work of this type was Stunden der Andacht (“Hours of Devotion”) by Fanny Neuda. First published in 1855, this work went through at least twenty-eight editions up until the 1920s. Martha Wertheimer published a special, revised version in 1936, addressing the conditions of Nazi Germany.
In Eastern Europe as well, the ideal of the bourgeois family came into play in nineteenth century tkhines, but in a rather different fashion. Maskilim, or “enlighteners,” men who wished to reform Eastern European Jewish life, wrote tkhines to reach the “benighted” traditional women with their reform program. Unlike earlier tkhine authors, female or male, they scorned their audience and the genre and wrote the prayers in a highly emotional style they thought would appeal to their audience. Often, because they thought they could thus sell more books, they attributed their works to female authors, either those who had actually written tkhines a century earlier or creations of their own imagination. (Because the maskilic practice of using female pseudonyms was well known, earlier scholars were skeptical of any attributions to female authorship. However, as we have seen, many seventeenth and eighteenth century women authors can be authenticated.) Alongside these newer maskilic tkhines, older texts and collections, both those originally published in Western Europe and those originally published in Eastern Europe, continued to be reprinted in Eastern Europe in numerous editions up until the Holocaust, often revised or garbled by the printers. In the early decades of the twentieth century, tkhines in Yiddish were also published in North America and other areas to which Eastern European Jews migrated.
Significance. The tkhines reveal a whole world of women’s religious lives, concerns, customs and settings for prayer. These texts are deeply spiritual, no less than the complex and esoteric works produced by Kabbalists and hasidic masters. The women (and men) who composed these prayers for women addressed the spiritual issues of their day, whether on the level of domestic piety or national redemption. The tkhines themselves are at home in the literature produced for the intellectual “middle class” of this period; they belong among the guides to the upright life, books of customs, condensations of guides to pious practices, and digests of mystical teachings that were read by householders and artisans. Indeed, the tkhines show how much women were a part of this intellectual and spiritual world. Finally, the tkhines provide a directness of passionately emotional personal prayer, mostly absent from the more collective and formalized male worship experience.
Recent Developments. As the use of Yiddish declined among emigrants from Eastern Europe in the late nineteenth and the twentieth centuries, and the Yiddish-speaking heartland was destroyed by the Holocaust, the genre of tkhines nearly disappeared, except among hasidim and other isolated traditional Yiddish-speaking populations. Since the 1980s, however, the tkhines have aroused new interest in both scholars and members of the Jewish public. Jewish women, in particular, have sought to find a “usable past” in which to root themselves. Jewish feminists have found role models in the historical tkhines uncovered by scholars and have also written new tkhines in their current vernaculars, English or Hebrew. Some new tkhines (and translations of old tkhines) are included in recent Conservative and Reconstructionist prayer books, while others have been published in anthologies of contemporary women’s prayers and writings. Some Orthodox women have turned to the traditional tkhines in Yiddish as a direct expression of Jewish women’s spirituality. This is despite the fact that many young Orthodox women today do not know Yiddish and are well educated in the Hebrew prayer book and classical sources in Hebrew; the Yiddish tkhines are published with English translation.
"dump": "CC-MAIN-2016-50",
"url": "https://jwa.org/encyclopedia/article/tkhines",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540798.71/warc/CC-MAIN-20161202170900-00177-ip-10-31-129-80.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9660938382148743,
"token_count": 3778,
"score": 3.375,
"int_score": 3
} |
Sloop Island Canal Boat
UPDATE: In 2012, LCMM researchers returned to the Sloop Island Canal Boat with a scanning sonar unit from BlueView. Read about the exceptional imagery that was produced: Scanning Sonar of the Sloop Island Canal Boat
In the summer of 2002, the Maritime Research Institute undertook an intensive underwater archaeological study of a canal boat in the waters of Lake Champlain near Charlotte, Vermont. The "Sloop Island Canal boat" is an early twentieth century standard canal boat. To our knowledge, this is the most in-depth investigation of a canal boat ever undertaken.
Pine Street Barge Canal Superfund Site
We were given the opportunity to study the vessel as part of the environmental clean-up of the Pine Street Barge Canal Superfund Site in Burlington. This small canal was built for easier loading/unloading of canal boats in the 1860s as Burlington’s waterfront boomed with the shipment of millions of board feet of lumber. In 1895, a coal gasification plant, which produced manufactured gas from coal and oil, was established next to the canal. In the process of creating manufactured gas, the locally abundant wood chips were used as a filter. Waste products from this process included coal tar, fuel oil, tar-saturated wood chips, cinders, cyanide, and metals. The wastes were disposed of in the wetlands around the canal. The legacy of this contamination is still with us today.
In 1983, Pine Street Barge Canal was put onto the National Priorities List as a Superfund site by the Environmental Protection Agency. The descendant companies of those that worked along the canal, Green Mountain Power among others, and the current landowners were charged with cleaning up the site. As part of this process, an archaeological study was conducted in the canal; it located five derelict canal boats.
All of the canal boats were of a type known as "Enlarged Erie-Class" built around 1900. The boats were deemed eligible for the National Register of Historic Places because of the unique information they could provide about the past. This presented a problem, because capping the contaminated area, part of the proposed environmental clean-up, would make the vessels inaccessible to future researchers. Studying the canal boats, however, was not possible due to contamination at the site.
Solution: Sloop Island Canal Boat Study
The solution to this problem presented itself in 1998 when LCMM’s Lake Survey Project located a canal boat of the same type as those five in the barge canal—except this one was in the broad lake. Given the designation Wreck Z because it was the twenty-sixth shipwreck we had located, the significance of this pristine archaeological site was immediately apparent.
It was then formally proposed that instead of studying the canal boats in the Superfund site, researchers instead study this one in waters off Sloop Island. After much discussion and debate, the Vermont Division for Historic Preservation, the Environmental Protection Agency, Green Mountain Power, and LCMM, among others, agreed. It was a unique solution, a case study of sorts, for an otherwise difficult situation.
With all the details worked out, our dive team found itself with the enormous and marvelous task of studying this large, intact shipwreck. We focused on two main goals: documenting the vessel’s structure and recovering the artifacts from the cabin. A team of seven MRI archaeologists spent seven weeks capturing every last scrap of information.
The vessel, which rests below ninety feet of cold water, is in excellent condition. Its hull, with one single, large open cargo hatch, rises about ten feet proud off the bottom, presenting an impressive structure. The vessel still carries its last cargo—a load of coal. At the canal boat’s stern is the most interesting feature: the cabin. The cabin’s interior contained all of the items that a family of canalers needed as they made their way from port to port. The presence of cargo and numerous artifacts indicates the vessel sank unintentionally, capturing a complete look into the lives of a nearly forgotten group of people.
After hundreds of hours of library and archival research, we still have not been able to locate the name of the Sloop Island Canal Boat. However, the volume of artifacts recovered from the cabin and fo'c'sle have led to a number of clues to its use and sinking.
These findings have been published in a report that can be downloaded (available here - 31.3mb PDF), as well as a full color popular publication designed for educators (available here - 1.47mb PDF), for anyone interested in this significant shipwreck.
At the moment, many of the recovered and conserved artifacts are on display at LCMM's Basin Harbor facility. In addition, the Sloop Island Canal Boat has now been opened as part of Lake Champlain's Underwater Historic Preserve System.
Many agencies, organizations, and individuals came together to make this project possible. We are especially grateful to Green Mountain Power, National Grid USA Service Company, Vermont Gas, the Environmental Protection Agency, the Vermont Division for Historic Preservation, the Wings Point Association, Luther Bridgeman and family, and Waterfront Diving Center.
For more information visit the Environmental Protection Agency's pages on the Pine Street Canal, which contains the following documents as PDFs:
- Photodocumentation of Historic Canal Cribwork
- Pine Street Canal Historic Resources Survey | <urn:uuid:0ea5ee22-2043-4b71-861e-0b56545861e2> | {
"dump": "CC-MAIN-2016-30",
"url": "http://www.lcmm.org/mri/projects/si_canal_boat.htm",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469258948913.96/warc/CC-MAIN-20160723072908-00016-ip-10-185-27-174.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9558465480804443,
"token_count": 1115,
"score": 3.46875,
"int_score": 3
} |
Faculty Scholarship 1994 - Present
Examining the Effects of Technology Attributes on Learning: A Contingency Perspective
In today's knowledge economy, technology is utilized more than ever to deliver instructional material to the learner. Nonetheless, information may not always be presented in a manner that maximizes the learning experience, resulting in a negative impact on learning outcomes. Drawing on the Task-Technology Fit model, a research framework was developed to investigate the influence of vividness, interactivity, task complexity, and learning style on performance, satisfaction, interest, and perceived mental effort in the context of learning how to use an office productivity tool via a computer-mediated learning environment. It was hypothesized that vividness and interactivity would increase satisfaction and interest and that the effects of vividness and interactivity on performance and perceived mental effort would vary depending on the complexity of the task. It was also hypothesized that vividness and learning style would interact to influence performance and perceived mental effort when a task was more complex. A laboratory experiment was employed to test the research model. The experiment manipulated two levels of vividness, interactivity, and task complexity, resulting in six unique treatment conditions. In each of these treatment conditions, subjects viewed a computer-based tutorial on how to complete a task using a specific tool in Microsoft Excel. Subjects were then asked to complete a similar task using this same Excel tool. Overall, strong support was found for the hypotheses. Findings indicate that presenting information in a more vivid or more interactive learning environment will significantly increase satisfaction with the learning environment as well as interest in the topic. Furthermore, strong support was found for utilizing a more vivid or more interactive presentation to increase performance and reduce perceived mental effort when a task is more complex. Mixed support was found regarding the influence of vividness and learning style on performance and perceived mental effort for a more complex task. This research contributes to our theoretical understanding of instructional design and the influence of technology characteristics on learning outcomes. These findings also serve to guide those who design and disseminate information in computer-mediated contexts. Moreover, multimedia production is both expensive and time consuming and, as this study indicates, may not always enhance learning outcomes.
"dump": "CC-MAIN-2014-41",
"url": "http://www.rowan.edu/colleges/business/publications/webpages/abstract.cfm?pub_id=1011&person_id=237",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657131304.74/warc/CC-MAIN-20140914011211-00048-ip-10-196-40-205.us-west-1.compute.internal.warc.gz",
"language": "en",
"language_score": 0.9471256732940674,
"token_count": 437,
"score": 2.59375,
"int_score": 3
} |
Bumblebees will buzz around flowers throughout the day in a seemingly random manner.
But a study by British universities has discovered that in fact bees are constantly working out the quickest route to collect the most food.
Despite having brains the “size of grass seeds”, bumblebees are able to calculate the most efficient route from flowers back to the nest.
The team from Queen Mary’s University and Royal Holloway University in London attached tiny antennae to tens of bees that pinged back the location of the insects as they foraged for pollen and nectar.
The results showed that the bees would try a number of different routes to a flower and between plants in order to work out the quickest way to and from a food source. Within hours or even minutes, the apparently random 'flight of the bumblebee' becomes an efficient, learned route.
The study, published in PLOS Biology, could help farmers work out the best way to grow crops so bees can pollinate them more easily. It could also help computer programmes to develop more efficient travel routes for humans.
Dr Nigel Raine, one of the authors of the study from RHU, said bees are performing quite a complicated 'computational task' for such a small creature with a tiny brain.
“Without the benefit of sat nav or GPS they can work out quickest way to do their job,” he said.
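The "computational task" here is essentially a small travelling-salesman problem: visit each flower once and return to the nest over the shortest total distance. The sketch below shows one simple strategy, a nearest-neighbour heuristic, over made-up nest and flower coordinates; it is purely illustrative, and nothing in the study suggests bees literally run this algorithm.

```python
import math

def distance(a, b):
    """Straight-line distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbour_route(nest, flowers):
    """Greedy route: from the current position, always fly to the
    closest unvisited flower, then head back to the nest."""
    route = [nest]
    unvisited = list(flowers)
    while unvisited:
        nxt = min(unvisited, key=lambda f: distance(route[-1], f))
        unvisited.remove(nxt)
        route.append(nxt)
    route.append(nest)  # return home
    return route

def route_length(route):
    """Total distance flown along the route."""
    return sum(distance(a, b) for a, b in zip(route, route[1:]))

# Hypothetical positions in metres
nest = (0.0, 0.0)
flowers = [(3.0, 4.0), (-2.0, 1.0), (5.0, -1.0), (1.0, 6.0)]

route = nearest_neighbour_route(nest, flowers)
print("route:", route)
print("length: %.1f m" % route_length(route))
```

Notably, the study suggests the bees do better than a fixed rule like this one: they keep trying alternative orderings across foraging trips and settle on shorter circuits, much as iterative optimization heuristics refine an initial route.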
Meanwhile, new research in the journal Science suggested pesticides are not as bad for bees as previously claimed.
"dump": "CC-MAIN-2013-20",
"url": "http://www.telegraph.co.uk/earth/earthnews/9555644/Scientists-solve-the-mystery-behind-the-flight-of-the-bumblebee.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706637439/warc/CC-MAIN-20130516121717-00059-ip-10-60-113-184.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9591447114944458,
"token_count": 314,
"score": 3.796875,
"int_score": 4
} |
When I mention the word "Nana," 17-month-old Jovie gets a huge grin on her face and points at my laptop – the place we see my mom during chats on Skype. And when 3-year-old Lily sees me using my phone, she demands to play games on it – Puzzingo or whatever free preschooler-friendly apps I have loaded. Both girls clap with excitement whenever "Sofia the First" comes on the Disney Channel.
I fall somewhere in the middle on the spectrum between thrilled and horrified about the amount of screen time the girls get on any given day – whether it's learning about dolphins on YouTube, seeing pictures of their cousins on my phone or watching cartoons on the TV.
And like me, I'm guessing most parents struggle with finding a balance in how much media their kids are exposed to, and have questions about the positive and negative side effects.
More and more research is being done to look at the effects of modern technology on young children. Not surprisingly, reviews are mixed.
Experts still recommend limiting screen time
With the explosion of smartphones, tablets and interactive learning apps, the American Academy of Pediatrics continues to recommend no screen time for children under the age of 2 and to limit screen time for children over the age of 2 to one to two hours a day of educational, nonviolent programming.
Studies (mostly related to non-interactive TV watching) have linked screen time to a myriad of negative outcomes:
- Attention deficit
- Behavioral problems
- Psychological difficulty
- Decreased physical activity
- Irregular sleep patterns
- Impaired academic performance
- Less time for imaginative play
New research suggests some technology can be positive
But in 2011 the National Association for the Education of Young Children and the Fred Rogers Center for Early Learning and Children's Media released a position paper (PDF) in which they defended the use of interactive media by children.
"When used wisely technology and media can support learning and relationships," the paper said. "With guidance, these various technology tools can be harnessed for learning and development; without guidance, usage can be inappropriate and/or interfere with learning and development."
Researchers in the U.K. released a study this year examining seven myths about young children and technology (PDF). While observing more than 50 3- and 4-year-olds and their families, they found that contrary to popular belief, technology didn't necessarily hinder a child's social interactions. In fact, when parents and siblings watched a favorite program together, the show offered fodder for discussion as well as imaginative play later on.
"Our research suggests that technologies can expand the range of opportunities for children to learn about the world around them, to develop their communicative abilities and to learn to learn," Lydia Plowman and Joanna McPake write.
I think the key point here is that any screen time – whether it's watching "Sesame Street" or playing on a phone – needs to be active and shared. Rather than plopping my kids in front of the TV or handing them my phone and walking out of the room, I need to sit down and be a part of the activity – talking about the show or helping them navigate the app. And even then, the amount of time they spend with a screen should be limited in favor of more imaginative and physical play.
The Mayo Clinic offers some great tips for limiting your kids' screen time. (My personal favorite was to be a good example of that yourself – those little eyes are always watching!)
Set aside time for creative play
Finally, Nancy Carlsson-Paige, a professor emerita of education at Lesley University in Cambridge, Mass., and author of "Taking Back Childhood" wrote a thought-provoking post for the Washington Post last year questioning whether technology saps creativity in children:
Kids need first-hand engagement — they need to manipulate objects physically, engage all their senses, and move and interact with the 3-dimensional world. This is what maximizes their learning and brain development. A lot of the time children spend with screens takes time away from the activities we know they need for optimal growth. We know that children today are playing less than kids played in the past.
Researchers who have tracked children's creativity for 50 years are seeing a significant decrease in creativity among children for the first time, especially younger children from kindergarten through sixth grade. This decline in creativity is thought to be due at least in part to the decline of play.
It's frightening to think that all this technology can be limiting our children's development. At the same time, I feel it's helped them answer questions and learn more about the world. Lily knows how to yell like an ibex, jump like a dolphin and howl like wolf, all from watching videos on YouTube.
As with everything involved in parenting I suppose the solution to the technology dilemma is balance – don't let them sit passively in front of a screen for hours on end and when you do power on, make sure the content is more fruits and veggies and less empty calories. Small, healthy portions are best.
Except for when Nana is on the screen – because you can never have enough Nana. | <urn:uuid:506fcee3-a683-4c27-b827-c2dc44728eac> | {
"dump": "CC-MAIN-2015-40",
"url": "http://www.denverpost.com/kickinit/ci_24271240/why-kids-screen-time-isnt-matter-just-saying",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736682947.6/warc/CC-MAIN-20151001215802-00114-ip-10-137-6-227.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9478026032447815,
"token_count": 1063,
"score": 3.046875,
"int_score": 3
} |
Management Square | Sep 20, 2017
What Are The Organizational Structures Types?
Companies usually create organizational structures that define how activities, task delegations, coordination and supervision are managed to achieve the organizational aims. The organizational structure also determines how roles and responsibilities are distributed among the employees. The flow of information is likewise shaped by the organizational structure.
Organizational structures represent the hierarchy within the company. They show the arrangement of authority, communications, rights and duties of the parties involved. The structure solely depends on the objectives and strategies to be employed by the company. It follows a particular pattern where there are departments and teams.
Organizational Structures Types
A company may organize its structure into groups divided according to product or geographical area. Several types of organizational structures can be observed in different companies.
Bureaucratic structures represent strict hierarchies in the management team. The bureaucratic structure is subdivided into three types:
- Pre-bureaucratic structures can be observed in small scale businesses and usually lack standards. There is only one person who decides for the whole organization, and communication usually takes place in one-to-one conversations.
- Bureaucratic structures usually have solid standards, though the organizations are relatively small compared to larger industries. The functional structure works very well for small businesses in which each department can rely on its employees.
- Post bureaucratic structures have tight hierarchies and standards but very much open to innovative ideas and trends. It has well-planned methodologies that resulted from systematic techniques and strategies.
In a functional structure, the whole organization is grouped according to purpose. Each department is allowed to create and navigate its own resources to come up with decisions for the attainment of company objectives. It promotes great efficiency in each department and makes the management easier to conduct. The functional structure is strongly recommended for large corporations that produce high quantities of products at low costs. However, one of the drawbacks of this structure is a lack of coordination and communication between departments because of the boundaries built by working separately.
The divisional structure is applicable to larger companies that operate in a wide geographic area or have sub-organizations. It covers different types of markets and products, so divisions are needed to manage each of them. One of the benefits of this structure is that it allows specific needs to be addressed at a specific time; these needs can be met rapidly and precisely. On the other hand, this type of structure costs a lot and may be expensive due to its size and scope. Further, communication is quite limited because of proximity issues.
The Matrix structure is a hybrid of the functional and divisional structures. This type of structure is used in multinational companies, and it allows the company to enjoy the benefits of both functional and divisional structures at the same time. The company creates teams responsible for completing assigned tasks. However, this type of structure can create power struggles because most areas of the company will have dual management.
Every company should have an organizational structure that fits its goals and vision. It should cover the management of resources and manpower. The type of organizational structure may help or hinder the company's progress towards accomplishing its goals. Therefore, a thorough understanding of which organizational structure to use is a must.
"dump": "CC-MAIN-2023-50",
"url": "https://www.project-management.pm/what-are-the-organizational-structures-types/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100112.41/warc/CC-MAIN-20231129141108-20231129171108-00516.warc.gz",
"language": "en",
"language_score": 0.9495611786842346,
"token_count": 670,
"score": 3.15625,
"int_score": 3
} |
In light of recent news here in London this past week, when more than 6 young men were stabbed at Hackney central station and elsewhere in the capital, I want to lend my professional perspective on the subject, raise awareness of the conditions and stigmas, and suggest possible resolutions to help decrease the bloodshed and the shock of this largely preventable condition.
Here are just a few staggering statistics, keeping in mind that behind every statistic is a human being.
Most parts of the world are safer than ever before, and at the same time, others record the most violence that human history has ever seen.
- Violence causes more than 1.6 million deaths worldwide every year
- 2% of all human deaths have been the result of us killing each other, while many are inured to the presence of violence.
- Homicide is the 5th most common cause of death in all age brackets between 1 and 44 years old.
- 9 out of 10 people in prison for violent crimes are men.
- Men die from homicide 3X the rate of women.
Defining Violence and Abuse
Violence and other forms of abuse are most commonly understood as patterns of behaviour intended to establish and maintain control over family, household members, intimate partners, colleagues, individuals, communities or groups. While violent offenders most often know their victims (intimate or estranged partners and spouses, family members, relatives, peers, colleagues, etc.), acts of violence and abuse may also be committed by strangers.
- 35% of women worldwide reported experiencing violence in their lifetime, whether physical, sexual, or both.
- 1 in 10 girls under the age of 18 was forced to have sex.
- 38% of women who are murdered are killed by their partners.
Violence and abuse may occur only once, may involve various tactics of subtle manipulation, or may occur frequently while escalating over a period of months or years.
Violence and abuse are used to establish and maintain power and control over another person, and often reflect an imbalance of power between the victim and the abuser, with fear and the struggle for control and power at the root of most forms of violence.
General Forms of Violence and Abuse:
- Verbal Abuse
- Financial Abuse
We are left to ask Why? We cannot hope to control violence if we are bewildered by it.
Mixing together all forms of violence in an attempt to find a common denominator has left us struggling to comprehend a school shooting, or worshipers slaughtered inside a church, or violence committed in a robbery, or the everyday violence of drunken brawls, domestic violence, or deadly road rage. Each of these violent acts can be understood as a specific behavior that is controlled, as all behaviors are.
Patients who are violent are not a ‘homogenous group’, and their violence reflects various biologic, psychodynamic, and social factors. Most researchers and clinicians agree that a combination of factors plays a role in violence and aggression, although there are differing opinions regarding the importance of individual factors.
Like most primates, humans evolved to be violent to one another. The biology of anger and aggression is the root cause of most violent behavior. While lethal violence may be part of our individual genetic disposition, it can be said it’s mostly governed by the evolution of our societies and the groups within them. We are on the brink of a new understanding of the neuroscience of violence. “We cannot change the biology of our brain, but if we choose to, we can comprehend it at the same level of detail that we comprehend the biology of a human heartbeat.”
Most of the time the neural circuits of aggression are life-saving, as when a mother instantly reacts aggressively to protect her child in danger, but sometimes they misfire and violence explodes inappropriately.
Triggers of Rage. The pressures of modern life.
High-speed transportation and communication increase opportunities for conflict among different groups of people, while access to weapons of violence amplifies the lethal effects of even one enraged mind.
Hackney mayoral candidate Pauline Pearce said the recent stabbings and shootings were partly a result of the children feeling disenfranchised. "They don’t feel they belong, they haven’t really got a meaning – they don’t feel that they have that connection to society, so a lot of things go wrong for them and sadly this is the sort of retaliation that comes." This observation lends weight to the notion that the roots of most forms of violence are founded in the many types of inequality which continue to exist and grow in society. Rates of violence have been seen to increase with lower education, less social stability, and in regions with high rates of unemployment. Others view the incidents of violence as a learned behavior associated with social norms and external factors, for example music which glorifies the behavior.
Whether you view the incidents of violence as driven by genetic or social factors, the evidence is there that the human brain is struggling to cope with an environment it was never designed to confront.
What about violence and mental health?
Violence in the context of mental illness has been sensationalized, increasing the stigma around the mental health community. This perception brings dire consequences for many mentally ill patients, increasing discrimination, causing isolation from society, and contributing to homelessness.
Violence has serious implications for society and the mental health community. We as a society need to view these incidents on a broader scale and conclude, from the information provided by ample research, that individuals with mental illness, when appropriately treated and cared for, do not pose any increased risk of violence over the general population. That being said, I stress the need for appropriate treatment of such individuals. We can no longer let the mentally ill fall through the cracks or remain in a cycle of treatment, relapse, and inappropriate care.
Mental Health Factors.
- Threatened Security
- History of Abuse
- Perceived Inequities
What can be done about it?
It can be said that violence is not driven by reason, it is driven by rage, yet violence is a choice, and therefore can be preventable.
So where do we turn our attention?
As we look to protect our communities by decreasing homicides by shootings and beyond, we need to increase education and amplify awareness so that all communities can support their neighbors, including the mentally ill, helping them achieve a higher quality of life as protection against feeling vulnerable and disenfranchised by their environments.
Who has the Answers?
Is the answer in educating protest organizers like G.A.N.G.? The protest organisers, Guiding a New Generation (G.A.N.G.), shared their stories and pleaded with residents for an end to the killings experienced last month in London. The lead organiser said, "We are trying to guide these children to let them know that their life is not going in the right direction. I want to say to them this is not the life… they are being sold a false narrative – and we are here to change that narrative for them."
Fortunately, the financial evidence exists: research has indicated that investing early to prevent conflicts from escalating into violent crises is 60X more cost effective than intervening after violence erupts. Therefore, allocating funding toward prevention measures, programs and trainings can yield positive results toward changing the global violence statistics.
Examples of Studies in Schools.
Life Skills Training showed a 42% reduction in physical and verbal youth violence, as well as a 40% reduction in psychological distress, including stress, anxiety and depression.
Mindfulness Practices. With the implementation of a meditation/mindfulness practice called "Quiet Time," some of the toughest schools are reporting that suspensions decreased by 79%, attendance increased to over 98%, and academic performance noticeably improved.
I encourage you to stay connected and deepen your community care connections. Learn what resources are available to you. Reach out to me. Sign up for our monthly newsletter to be notified of our upcoming blogs, events and more. Know you are surrounded by caring professionals and neighbors who want to come to your aid and to the betterment of your surrounding communities and beyond.
It is my hope that this blog aids in the recognition and management of dangerous behaviors and minimizes risk to communities across generations to come.
In good mental health,
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2686644/
https://www.standard.co.uk/news/london/protesters-call-for-end-to-londons-surge-in-bloodshed-as-six-more-young-men-stabbed-within-90-a3807106.html
"dump": "CC-MAIN-2020-05",
"url": "https://www.noelmcdermott.net/violence-safer-together/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250626449.79/warc/CC-MAIN-20200124221147-20200125010147-00093.warc.gz",
"language": "en",
"language_score": 0.9449756145477295,
"token_count": 1784,
"score": 3,
"int_score": 3
} |
The next few months will be a test of China’s resolve to improve the environment.
China needs to sharply slow down economic growth if it wants to reach its energy efficiency targets by the end of the year, according to a report by Standard Chartered Bank.
China’s latest five-year plan calls for a 20% reduction in the amount of fuel used per dollar of economic output from 2005 levels by the end of 2010. To achieve that, the report says, China would have to use 6% less electricity per month from September to December than its average consumption in the first eight months of the year. That implies the growth rate of industrial output would fall by half in the second half of the year to 7.4%, according to the bank’s calculations.
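The arithmetic behind the bank's claim is straightforward: energy intensity is energy use divided by economic output, so a fixed intensity target caps allowable energy use for any given level of GDP. A rough sketch of that accounting follows; the baseline and growth figures are illustrative placeholders, not Standard Chartered's actual inputs.

```python
def energy_intensity(energy_use, gdp):
    """Energy intensity: energy consumed per unit of economic output."""
    return energy_use / gdp

# Illustrative 2005 baseline in arbitrary index units
base_energy, base_gdp = 100.0, 100.0
base_intensity = energy_intensity(base_energy, base_gdp)

# Five-year-plan target: 20% below the 2005 intensity by end of 2010
target_intensity = base_intensity * (1 - 0.20)

# Suppose GDP has grown roughly 70% since 2005 (hypothetical figure)
gdp_2010 = base_gdp * 1.70

# Maximum energy use still consistent with the intensity target
max_energy_2010 = target_intensity * gdp_2010
print("allowed 2010 energy use: %.0f%% of the 2005 level"
      % (100 * max_energy_2010 / base_energy))
# If actual consumption already exceeds this cap, the short-run levers
# are rationing energy use or letting output growth fall, which is
# exactly the trade-off the article describes.
```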
China was making good progress on energy efficiency until 2009, when the country’s infrastructure-heavy stimulus package – implemented as a response to the financial crisis – led to surges in cement, steel and other energy-intensive industries, the report says.
Beijing was lauded for setting ambitious energy targets when the five-year plan was released in 2006, but pressure to hit those targets has caused turmoil as local governments struggle to reach their quotas. With the end-of-year deadline looming, there’s no time to install efficiency upgrades. Instead, officials are just rationing electricity, with one city in Hebei Province going so far as to squeeze power supplies to hospitals and shut down any traffic light not powered by solar cells.
In the wake of the uninspiring outcome over the weekend of UN climate talks in Tianjin – which ended with the U.S. saying China needs to step up, and China likening the U.S. to a preening pig – there’s going to be more focus on individual countries’ voluntary actions.
While it still has relatively low per-capita carbon emissions, China is the world’s biggest source of greenhouse gases, most of which come from the burning of coal to power the economy.
In a report on Friday, China’s official Xinhua news agency quoted Minister of Industry and Information Technology Li Yizhong as saying China would meet its energy intensity target this year, mostly by pushing through efficiency reforms in the industrial sector. The report made no mention of the effects of those reforms on GDP.
What matters more to Chinese leaders? Cutting pollution or maintaining economic growth?
With the deadline approaching, we’ll have an answer soon enough. | <urn:uuid:98ab2e19-57ec-405d-9a5b-a647a1837ef2> | {
"dump": "CC-MAIN-2015-18",
"url": "http://blogs.wsj.com/chinarealtime/2010/10/12/turn-the-lights-out/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246640124.22/warc/CC-MAIN-20150417045720-00002-ip-10-235-10-82.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9566690921783447,
"token_count": 514,
"score": 2.625,
"int_score": 3
} |
Water Quality Trading Glossary
Best Management Practices (BMP): Practices designed to conserve soil and water resources used in farming and to lessen environmental damage from pollution sources. Examples include runoff or erosion management systems at a construction site or timber stand, animal waste storage systems at a farm, and buffer strips along riparian zones.
“Cap and Trade” Approach: Cap and trade programs set a limit on pollutants from all point source polluters, distribute credits to these polluters and allow the polluters to meet their limit however they see fit (“capping the system”). This capping creates markets for trading the excess credits of those who excel at cutting pollution. Under such a system, for example, rather than install expensive equipment to meet its water pollution limit, a factory or treatment facility may find it cheaper to buy excess credits from producers who have cut more than their allotted share of pollution through improved land best management practices (BMPs).
Clean Water Act (CWA): The CWA establishes a regulatory framework to protect water quality throughout the United States. The goal is to “restore and maintain the chemical, physical, and biological integrity of the Nation’s waters (U.S.C. 1251-1387).”
“Command and Control”: This statement refers to how regulation historically treats point sources – “commanding” them to “control” pollution in a specific way. Under the command and control style of regulation, the U.S. EPA requires every point source to use the same control equipment and the same methods for reducing pollution, no matter how much they pollute or how much installation costs.
Current Load (CL): Pollutant discharge from any source under current management practices.
Delivery ratio: The fraction of a watershed’s contaminant yield that reaches the receptor point (i.e., 1 kg of phosphorus from one location is not equal to 1 kg of phosphorus from another).
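In practice, the delivery ratio is used to discount a source's load by location before loads are compared or traded. A minimal sketch, with hypothetical ratios:

```python
def load_at_receptor(load_at_source_kg, delivery_ratio):
    """Portion of a source's pollutant load that actually arrives
    at the receptor point (0 < delivery_ratio <= 1)."""
    return load_at_source_kg * delivery_ratio

# Hypothetical: two farms each discharge 1 kg of phosphorus, but sit
# at different distances from the monitored receptor point.
print(load_at_receptor(1.0, 0.9))  # 0.9 kg arrives from the nearby farm
print(load_at_receptor(1.0, 0.4))  # 0.4 kg arrives from the distant farm
```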
Discharge: Discharge is defined by the Clean Water Act as the addition of any pollutant (including animal manure or contaminated waters) to navigable waters. Navigable waters are broadly defined as any surface water, whether in man-made ditches or natural streams, that leaves an operator's property.
Hot spots: Highly degraded localized areas in a watershed.
Impaired Water Body: An impaired water body is one that is polluted. A state’s TMDL “Impaired Waters List” is a list of the state’s waters that fail or are threatened to fail the state’s water quality standards, even after the installation of pollutant controls. These lists are also referred to as “TMDL Lists.”
Load Allocation (LA): The LA is the portion of the allowable pollutant discharge attributed to existing and future nonpoint sources.
Margin of Safety (MOS): A required component of TMDL development designed to account for uncertainty in load and waste load allocation calculations.
National Pollution Discharge Elimination System (NPDES) Permit: A NPDES permit is a pollution discharge permit issued, pursuant to the Clean Water Act, by a state agency or by the U.S. EPA to a “point source” discharger. The permit specifies how much of a given pollutant can be present in a discharge and establishes monitoring and reporting requirements for that point source.
Nonpoint source pollution (NPS): Pollution that is diffuse, entering a waterway from a wide geographic area rather than a single pipe. Examples include polluted runoff from urban streets, agricultural fields, timber harvesting areas, airborne pollution, and contaminated sediment.
Point source pollution (PS): Pollution caused by a discharge of waste via a pipe. Examples include discharge from municipal wastewater treatment facilities and industries. Most sources are required to have permits with conditions designed to control discharges.
Receptor point: The location point for measuring the pollutant load or concentration in a water body.
Total Maximum Daily Load (TMDL): A watershed cleanup program, required by the Clean Water Act under Section 303(d), designed to deal with problem pollutants from all sources, including point and nonpoint sources. This program is important for nonpoint source controls in particular because of the absence of other mandatory control mechanisms under federal law. Under this provision, states are required to identify waters that are polluted even after all mandated controls have been applied. States must then develop watershed cleanup plans called “TMDLs.” In order for the U.S. EPA to approve a proposed TMDL, the state must demonstrate that there is a “reasonable assurance” that the controls, on nonpoint and point sources alike, can be achieved.
Target load (TL): Pollutant concentration or load allowed determined by regulation.
Total Phosphorus (TP): Total phosphorus is all of the phosphorus found in a water sample. Phosphorus exists in water in either a particulate phase or a dissolved phase. Phosphorus in natural waters is usually found in the form of phosphates (PO₄³⁻). Phosphates can be in inorganic form or organic form. The U.S. EPA recommends that total phosphate not exceed 0.05 mg/L (as phosphorus) in a stream at a point where it enters a lake or reservoir.
Total Suspended Solids (TSS): Total Suspended Solids (TSS) are solids suspended or dissolved in water that can be trapped by a filter. TSS can include a wide variety of material, such as silt, decaying plant and animal matter, industrial wastes, and sewage. High concentrations of suspended solids can cause many problems for stream health and aquatic life.
Trading ratio: This ratio is used to account for the uncertainty regarding the effectiveness of nonpoint source controls. It is applied in trades among point and nonpoint sources. A trading ratio of 3:1 means that for every one unit increase of pollutant from a point source, there must be a corresponding three unit decrease of that pollutant from a nonpoint source.
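A short illustration of how the ratio is applied in a hypothetical trade (the figures are invented for the example):

```python
def required_nonpoint_reduction(point_increase, trading_ratio=3.0):
    """Units of nonpoint-source reduction a buyer must secure to
    offset each unit of added point-source discharge."""
    return point_increase * trading_ratio

# Hypothetical: a treatment plant wants to discharge 10 extra kg of
# phosphorus per year under a 3:1 trading ratio, so it must purchase
# credits covering 30 kg of nonpoint-source reductions.
print(required_nonpoint_reduction(10.0))  # -> 30.0
```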
Waste Load Allocation (WLA): The portion of the allowable pollutant discharge assigned to each existing and future point sources.
Watershed: The geographic region from which water drains into a particular water body, like a bay, river, or lake. The watershed includes the land resources as well as the water body. Also called a drainage basin.
Water Quality Trading (WQT): Pollution sources in a watershed can face different costs to control the same pollutant. Water quality trading is an approach to achieve water quality goals more efficiently by allowing facilities to meet regulatory obligations by purchasing environmentally equivalent (or superior) pollution reductions from another source at a lower cost. The result is achieving the same water quality improvement at a lower overall cost. | <urn:uuid:9affd4a3-9462-4ba7-bad2-480b44af789a> | {
"dump": "CC-MAIN-2018-22",
"url": "http://bearriverinfo.org/water-quality-trading/water-quality-trading-glossary",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865450.42/warc/CC-MAIN-20180523043959-20180523063959-00612.warc.gz",
"language": "en",
"language_score": 0.9118558168411255,
"token_count": 1411,
"score": 3.484375,
"int_score": 3
} |
Students at Watchung Hills Regional High School in New Jersey were fascinated when they heard about an Orange Out Against Bullying in Marshalltown, Iowa. When they got together, they decided to create their own "White-Out to Erase Bullying" event. The campaign took on the flavor of their community. Even the weather cooperated, blanketing the town with snow as high school leaders tied white ribbons on snow-laden trees and students led activities pledging not to be silent in the face of bullying at their high schools, middle schools and elementary schools. Even the mayor and city council members joined the effort.
This film is a great way to spark discussion as part of a schoolwide campaign. Click here to get the Not In Our School Anti-Bullying Campaign Quick Start Guide. | <urn:uuid:7edc4775-b0b1-419f-b641-c61f32e1eee9> | {
"dump": "CC-MAIN-2014-35",
"url": "http://www.niot.org/nios-video/white-out-erase-bullying",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500811391.43/warc/CC-MAIN-20140820021331-00033-ip-10-180-136-8.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.970963180065155,
"token_count": 192,
"score": 3.0625,
"int_score": 3
} |
This semester, nearly one hundred Biology 111 students are studying the micro-aquatic ecosystems of Knoxville-area water samples and blogging as they go. Hannah Barry is just one of the students honing her science writing skills by describing what she sees happening to microorganisms as the aquatic environment changes.
Each spring, hundreds of pilgrims from across the country and around the world, descend upon the Great Smoky Mountains National Park to experience and celebrate the remarkable views in what is known as the Spring Wildflower Pilgrimage. In 1951, the year of the first annual pilgrimage, visitors atop Clingmans Dome, the highest point in the Great Smoky Mountains National Park, could have seen rich green hillsides and a view that stretched for 100 miles.
“Would it be feasible to promote some sort of a spring flower jubilee?” It was that simple question, posed 60 years ago, that birthed an event that now attracts people from all over the country and the world to the Great Smoky Mountains every year for the Spring Wildflower Pilgrimage, being held this year April 21 through 25.
Three graduate students in the College of Arts and Sciences at UT Knoxville are recipients of the 2010 National Science Foundation’s Graduate Research Fellowship. The NSF awards are given to students based on their potential as young scientists and for intellectual merit and broader impact. The fellowships are used to further their research.
Sixty years ago it was just a seed of an idea inside Bart Leiper’s head — a celebration of the Great Smoky Mountains National Park. Leiper, general manager of Gatlinburg’s Chamber of Commerce, wrote Samuel Meyer, then head of the botany department at UT Knoxville, requesting that the department arrange a so-called spring flower jubilee. Seeing the opportunity to turn the park into a giant outdoor classroom for students, botanists and nature-lovers alike, Meyer recruited professors Fred Norris and Royal Shanks to organize the first ever Spring Wildflower Pilgrimage in the Smokies.
Every spring for the past 59 years, hundreds of nature lovers from all over the world have descended upon the Great Smoky Mountains as part of the Spring Wildflower Pilgrimage. The event, which began with botanists from UT Knoxville, now involves as many as 1,000 participants. | <urn:uuid:6d3912a3-9d3d-4e26-a5eb-3528118645ad> | {
"dump": "CC-MAIN-2016-30",
"url": "http://tntoday.utk.edu/tag/biology/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469258948335.92/warc/CC-MAIN-20160723072908-00311-ip-10-185-27-174.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9334364533424377,
"token_count": 481,
"score": 2.640625,
"int_score": 3
} |
Posted by Dr. Claire McCarthy September 3, 2012 07:26 AM
There's a scary new study showing that obesity can hurt kids' brains.
It's not news that obesity is bad for kids. It increases their risk of diabetes, cardiovascular disease, orthopedic problems and a whole bunch of other health problems. But what this study in the journal Pediatrics is talking about is different: it's talking about effects on the brain.
Researchers looked at 49 adolescents with metabolic syndrome. Metabolic syndrome, a consequence of obesity, is the triad of insulin resistance (pre-diabetes or diabetes), high blood pressure and high blood lipids. The researchers compared the adolescents with 62 adolescents who had the same socioeconomic background but didn't have metabolic syndrome.
The kids with metabolic syndrome had more trouble with arithmetic, spelling, attention and mental "flexibility" than the ones who didn't have metabolic syndrome. Even more frightening, the researchers saw actual changes in their brains, in the hippocampus (which plays a crucial role in memory) and the white matter (which passes messages through the brain).
It was only a small study, and not all kids with obesity have metabolic syndrome. But this study is alarming--especially since we don't know if losing weight can make the brain go back to normal. Given that brains are still developing in adolescence, it's very possible that the changes could be permanent.
What else do we need before we take the problem of childhood obesity really seriously? More and more, it is becoming clear that obesity can steal a child's future away.
In another study in the same edition of Pediatrics, German researchers looked at all the risk factors for childhood obesity and calculated which had the largest effects. You know what the two biggest factors were? Parental obesity and media time. If we tackle those two, it would have a bigger effect than getting kids to exercise or eat fruits and vegetables, they say. So as we start out this new school year, let's shut off the television and video games--and parents, when you are buying back-to-school shoes for the kids, pick up a pair of sneakers for yourself.
Let's work together to get our children's future back.
"dump": "CC-MAIN-2017-04",
"url": "http://archive.boston.com/lifestyle/health/mdmama/2012/09/obesity_is_bad_for_kids_brains.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282631.80/warc/CC-MAIN-20170116095122-00418-ip-10-171-10-70.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9536923170089722,
"token_count": 526,
"score": 2.984375,
"int_score": 3
} |
You will investigate heating curves for different rates of heating and different substances.
After completing this tutorial, you will be able to complete the following:
All matter is composed of atoms or molecules that are in constant motion. The average kinetic energy of the particles in an object is the measure we call temperature. Because of the constant motion, all particles have heat, or thermal energy. When heat is being transferred away from an object, the particles move slower and slower. When a substance gains heat, the particles move faster and faster. An object that gains heat may undergo a phase change. When a substance is in a phase change (melting or boiling), it is gaining heat energy. All of the energy added to the substance at that temperature is used to overcome the attractive forces between the particles, and therefore the temperature remains constant during the phase change. Once the attractive forces are overcome, the added energy is again used to increase the temperature of the particles.
In this Activity Object, students investigate heating curves. During two experiments, students heat 100-gram samples of sulfur and paraffin at three different levels of heat. They observe that when there is a phase change of a substance, such as melting or boiling, the temperature remains constant, and when the substance is not changing phase, it is increasing in temperature. As heating curves are graphed, it is easy to see that the rate of heating a substance does not change the temperatures at which those substances start to melt and boil, but the time required for a phase change is affected. Additionally, every substance has its own boiling point and melting point because they are characteristic properties of matter.
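The plateau behavior described above can be reproduced with a toy energy-bookkeeping model: heat added away from a transition warms the sample according to q = m · c · ΔT, while heat added at the melting or boiling point is absorbed as latent heat until the phase change completes. The sketch below uses round, illustrative constants, not the measured properties of sulfur or paraffin.

```python
def heating_curve(mass_g, power_w, seconds, step=10.0,
                  c=2.0,                                 # J/(g*K), illustrative
                  melt_T=50.0, latent_melt=100.0,        # degC, J/g
                  boil_T=150.0, latent_boil=400.0):      # degC, J/g
    """Return (time, temperature) points for constant-power heating,
    holding temperature flat while latent heat is being absorbed."""
    T, melted, boiled = 20.0, 0.0, 0.0
    points, t = [], 0.0
    while t <= seconds:
        points.append((t, T))
        q = power_w * step                   # joules added this step
        if T >= boil_T and boiled < latent_boil * mass_g:
            boiled += q                      # boiling plateau
        elif T >= melt_T and melted < latent_melt * mass_g:
            melted += q                      # melting plateau
        else:
            T += q / (mass_g * c)            # ordinary warming
        t += step
    return points

# 100 g sample heated at 200 W; print one reading per minute.
for t, T in heating_curve(100.0, 200.0, 600.0)[::6]:
    print("%4.0f s  %6.1f degC" % (t, T))
# Doubling the power in this model shortens the plateaus but does not
# move the temperatures at which they occur, matching the observation
# that melting and boiling points are characteristic properties.
```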
|Approximate Time|20 Minutes|
|Pre-requisite Concepts|Learners should be familiar with boiling, melting and states of matter.|
|Type of Tutorial|Experiment|
|Key Vocabulary|boiling point, change of state, condensation|
"dump": "CC-MAIN-2018-39",
"url": "http://www.uzinggo.com/melting-boiling-points-heating-curves/heating-curves/chemistry",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156314.26/warc/CC-MAIN-20180919235858-20180920015858-00159.warc.gz",
"language": "en",
"language_score": 0.9159876704216003,
"token_count": 385,
"score": 4.625,
"int_score": 5
} |
by Scott Bestul
Only 2 out of every 10 newborn whitetails will survive a year in northern Wisconsin, if one year of research done by Badger State scientists proves to be a trend. DNR researchers captured and radio-collared 30 fawns in the spring of 2011; by April of this year only 6 were still alive.
Predation was the leading cause of mortality, accounting for 15 dead fawns (bears killed 5, “unknown predators” took another five, bobcats nailed 4 fawns, coyotes got one, and “unidentified canid” took another). Hunters, poachers, vehicle collisions and another “unknown” rounded out the list.
This 80% fawn mortality rate is only the beginning of the data the WDNR is gleaning from a multi-year mortality study. In the winter of 2010, researchers began capturing adult deer in two study areas: one located in the heavily forested northern part of the state, the other in central Wisconsin, where agriculture is more prevalent. Fawns were captured and collared beginning in the spring of 2011. According to the Milwaukee Journal-Sentinel, the $2 million study is being funded by the Federal Aid in Wildlife Restoration Fund.
The tough life of a north woods fawn stands in contrast to those collared in the central Wisconsin study area, where 27 of 48 fawns collared in 2011 had survived to this spring, for a survival rate of 56%. There, a fawn was as likely to be killed by a vehicle (six deaths) as eaten by a predator (six deaths, four attributed to coyotes). Natural causes accounted for the rest of fawn mortality sources.
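The quoted rates are simple proportions of the radio-collared animals; a quick check of the article's figures:

```python
def survival_rate(alive, collared):
    """Percent of radio-collared fawns still alive at the check date."""
    return 100.0 * alive / collared

print("north woods: %.0f%% survived" % survival_rate(6, 30))   # 20%, i.e. 80% mortality
print("central WI:  %.1f%% survived" % survival_rate(27, 48))  # 56.3%, reported as 56%
```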
WDNR researchers are also examining mortality factors for adult deer, and I look forward to seeing more data on that as it’s accumulated. I also applaud the agency for taking on this study. It will not only provide much-needed information, but it has also involved citizen input and cooperation. I know two landowners who have allowed deer to be captured on their land in the central study area, and another pair of young women who assisted researchers with fawn captures in the north woods this spring. Wisconsin residents love their whitetails, and it’s great to see the DNR engaging them in this important research project. | <urn:uuid:ae8f69e9-25a3-4c06-afe5-2fa3836935da> | {
"dump": "CC-MAIN-2014-10",
"url": "http://blog.mysanantonio.com/racksnreels/2012/07/09/study-mortality-rate-high-for-wi-big-woods-fawns/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678692158/warc/CC-MAIN-20140313024452-00096-ip-10-183-142-35.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9615717530250549,
"token_count": 477,
"score": 2.75,
"int_score": 3
} |
Jane Austen is a British writer who is often referred to as the woman who shaped the way modern novels look. She was the one to introduce the ordinary life of regular people into her books. Despite the fact that Austen’s works were widely read and positively acclaimed during her lifetime, the author still published them anonymously, signed “By a Lady,” due to the sensitive nature of the events she portrayed in her novels.
Jane Austen wrote 6 major works that all focus on the role of women in society and their dependence on men. She herself never married, never had her own house, let alone her own room, but her social life was intense and she was full of romantic dreams. During her lifetime she knew many men, especially middle-class landlords, with whom she shared a passion for literature and irony. Jane Austen would later depict many of these men in great detail in her texts.
Jane Austen was born on December 16, 1775, in a small town called Steventon, where her father was a rector at the Anglican church. She was the 7th child and the second daughter in the family. She spent most of her childhood and adolescence in Hampshire. Although the family didn't have much money for schooling, Jane and her sister were sent to school in Oxford. In a little over a year, she had to go home due to sickness. Jane received most of her education at home. Despite being educated at home, she still received a far better set of knowledge and skills than other females of her time.
It was thanks to the home education she received from her father that Jane developed a good taste for literature and grew to love reading the classics. She started writing at the age of 14, mostly letters on small pieces of paper in which she described the events and people she saw around her. The family moved to Bath when she was 25, and for some time Jane stopped writing out of homesickness, missing her friends back in Steventon.
George Austen, Jane’s father, actively encouraged his daughter’s writing, buying her paper and a writing desk; he even tried to find her a publisher. After his death in 1805, Jane was left to live with her sister Cassandra and their mother in a rather difficult financial situation. From that time on, the three women kept moving around, visiting their brothers’ families and staying with them for a while. The brothers also agreed to set aside a certain annual allowance to help support their sisters and mother. Jane experienced firsthand what it felt like to depend on a man for her living.
Jane Austen’s novels mostly center on the provincial life and morals of the British of her time, with particular emphasis on their psychological aspects. At the same time, the author devotes almost no attention to the appearance of her characters, their clothing or the decoration of their houses. The reader will have a hard time finding landscape descriptions, but will find plenty of dialogue to read through. It is hard to imagine that the quiet passions described in these books were written in the era of the Napoleonic wars and the colonial conquests in Africa.
The author’s writing style is simple and clear. She avoided complex sentences, multiple meanings and overly poetic language, and would spend hours editing her texts to achieve concision. Her signature is a fine irony that runs through every line of the texts, sometimes reaching the level of grotesque in her portraits of snobs, hypocrites and loafers.
Jane’s writing can be divided into two periods. The earlier period begins at the end of the 1790s and comprises such works as “Northanger Abbey” (completed by the author in 1803, published in 1818), a parody of the Gothic novels so popular at the time. The earlier period also includes the first versions of “Sense and Sensibility” (1811) and “Pride and Prejudice” (1813), both of which the author later heavily reworked. The later period comprises three novels: “Mansfield Park” (1814), “Emma” (1815) and “Persuasion” (1817).
“Sense and Sensibility” (1811) is the story of two impoverished Dashwood sisters, Marianne and Elinor, who are trying to find suitable matches that would improve their financial situation. The heroines, so different in character, are set on marriage whatever the cost.
In all her novels, Jane Austen’s protagonists do get married in the end. “Pride and Prejudice” (1813) depicts the clash between the independent and intelligent Elizabeth Bennet and the rich, self-assured landowner Fitzwilliam Darcy. After an initial rejection, the couple finds a way to build a family. Austen originally titled an early version of this story “First Impressions”.
“Emma” (1815) was written in a playful mode. It is the story of Emma Woodhouse, a rich and kind young lady who spends her time making matches for people she knows. She finally finds her own happiness in marriage, with the very person she had never dared to consider for a love affair before.
In her works, Jane Austen depicts the life of middle-class Britons with understanding and a touch of humor. She describes middling landowners who depended on the income from their land, small village pastors and their families. In all these circumstances, a woman’s social, economic and personal position depended on her ability to arrange a marriage with the right man.
Jane Austen died on July 18, 1817, of Addison’s disease, at only 41 years of age. Before her death she had started writing her last novel, “Sanditon” (first published in 1925), but managed to complete only 12 chapters of it. Her brother, Henry Austen, was the first to publicly identify his sister as the author of her novels.
"dump": "CC-MAIN-2020-29",
"url": "https://jgdb.com/biography/author-jane-austen",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657172545.84/warc/CC-MAIN-20200716153247-20200716183247-00224.warc.gz",
"language": "en",
"language_score": 0.9831116199493408,
"token_count": 1291,
"score": 3.578125,
"int_score": 4
} |
Okay, okay, okay… What do you get when you cross 6 West Linn High School Environmental Science Classes, an eroding creek bank, and 3 beautifully sunny days?
Well looking back at the last 3 days, apparently the answer is: a feat of bioengineering and the beginnings of a restored riparian zone. We warned you that this wasn’t funny [see post title] despite the fact that one tends not to be so dry when working around streams… Dang it! It appears that hanging out with the punsters at West Linn High School does rub off on you.
Wordplay aside, the students at West Linn got a ton of work done over the last three days. At this site, West Linn went through all the steps of stabilizing an eroding bank with bioengineering. Before the students arrived, an earth mover had regraded the streambank from a steep cliff of dirt into a gradual slope, but to prevent further erosion a lot of bioengineering work was still needed to stabilize the bank for the long term. On the first day the students arrived, we placed fascine bundles of willow and dogwood in trenches, anchored them down and buried them. These will grow into new trees and create a matrix of roots that will hold the soil in place and keep sediment from washing into the stream. On the second day, students spread out straw, rolled out a carpet of natural fiber fabric, and staked it down. This will keep soil from washing into the stream in the short term while we wait for those fascine bundles to take root and grow. On the final day, students planted and staked over 300 trees and shrubs to provide a diverse community of plants for the riparian zone. For more specifics and better pictures on how this is accomplished, check out this previous posting here.
West Linn did a wonderful job, and we look forward to having them back at Abernethy! | <urn:uuid:bf70cd61-7039-4c19-9a36-c7690bc585b9> | {
"dump": "CC-MAIN-2018-26",
"url": "https://solvgreenteam.wordpress.com/2011/11/04/stream-bank-restoration-at-abernethy-creek-is-no-joke/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860041.64/warc/CC-MAIN-20180618031628-20180618051628-00264.warc.gz",
"language": "en",
"language_score": 0.9460902214050293,
"token_count": 406,
"score": 2.78125,
"int_score": 3
} |
Andrea Amati (Italian, ca. 1505–1578)
Violin, ca. 1559
Spruce, maple, ebony; 7 15/16 x 22 5/8 in. (20.2 x 57.5 cm)
The Metropolitan Museum of Art, New York, Purchase, Robert Alonzo Lehman Bequest, 1999 (1999.26)
Andrea Amati developed the modern form of the violin in Cremona by the mid-sixteenth century. He was the first member of the famous Cremonese school of lutherie, which included several generations of his own family, the Guarneris, and Antonio Stradivari. Other traditions of lutherie also developed in northern Italian cities such as Brescia and Milan.
This decorated violin bears the Latin motto Quo unico propugnaculo stat stabitque religio (By this bulwark alone religion stands and will stand) on its ribs and has the remains of decoration on its back, including fleurs-de-lis in the corners. Recent scholarship suggests that this instrument may have been a part of a set made as a gift for the marriage of Philip II of Spain to Elisabeth Valois in 1559.
Performed on the Amati violin by Jörg-Michael Schwarz. Recorded at The Metropolitan Museum of Art, February 2010.
Heilbrunn Timeline of Art History: “Violin Makers: Nicolò Amati (1596–1684) and Antonio Stradivari (ca. 1644–1737)“ | <urn:uuid:57ec0cee-749d-4ab7-ba40-aa3d50e33d97> | {
"dump": "CC-MAIN-2014-23",
"url": "http://blog.metmuseum.org/guitarheroes/violin-ca-1559/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273012.22/warc/CC-MAIN-20140728011753-00024-ip-10-146-231-18.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9436463713645935,
"token_count": 330,
"score": 2.78125,
"int_score": 3
} |
Marx-Engels Subject Archive
"The first premise of all human history is, of course, the existence of living human individuals. Thus the first fact to be established is the physical organisation of these individuals and their consequent relation to the rest of nature....Men can be distinguished from animals by consciousness, by religion or anything else you like. They themselves begin to distinguish themselves from animals as soon as they begin to produce their means of subsistence, a step which is conditioned by their physical organisation. By producing their means of subsistence men are indirectly producing their actual material life.
"The materialist conception of history starts from the proposition that the production of the means to support human life and, next to production, the exchange of things produced, is the basis of all social structure; that in every society that has appeared in history, the manner in which wealth is distributed and society divided into classes or orders is dependent upon what is produced, how it is produced, and how the products are exchanged. From this point of view, the final causes of all social changes and political revolutions are to be sought, not in men's brains, not in men's better insights into eternal truth and justice, but in changes in the modes of production and exchange.
Particulars in theory and practice:
18th Brumaire of Louis Bonaparte (abstracts)
Chapter One: "Men make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past. The tradition of all dead generations weighs like an Alp on the brains of the living...."
Chapter Three: "As against the united bourgeoisie, a coalition between petty bourgeois and workers had been formed, the so-called social-democratic party.... the revolutionary point was broken off and a democratic turn given to the social demands of the proletariat; the purely political form was stripped off the democratic claims of the petty bourgeoisie and their socialist point thrust forward. Thus arose Social-Democracy.
Chapter Seven: "France therefore seems to have escaped the despotism of a class only to fall back under the despotism of an individual, and what is more, under the authority of an individual without authority. The struggle seems to be settled in such a way that all classes, equally powerless and equally mute, fall on their knees before the rifle butt. But the revolution is thoroughgoing. It is still traveling through purgatory. It does its work methodically."
Part I: Philosophy
§ 1: The idea that all men, as men, have something in common, and that to that extent they are equal, is of course primeval. But the modern demand for equality is something entirely different from that; this consists rather in.....
§ 2: Hegel was the first to state correctly the relation between freedom and necessity. To him, freedom is the insight into necessity. Freedom does not consist in any dreamt-of independence from natural laws, but in the knowledge of these laws.....
Part II: Political Economy
§ 1: Political economy, in the widest sense, is the science of the laws governing the production and exchange of the material means of subsistence in human society. Production and exchange are two different functions. Production may occur without exchange, but exchange -- being necessarily an exchange of products -- cannot occur without production.
§ 2: Private property by no means makes its appearance in history as the result of robbery or force. On the contrary. It already existed, though limited to certain objects, in the ancient primitive communities of all civilised peoples. It developed into the form of commodities within these communities.....
§ 3: The question at issue is how we are to explain the origin of classes and relations based on domination, and if Herr Dühring's only answer is the one word "force", we are left exactly where we were at the start.
§ 4: All religion, however, is nothing but the fantastic reflection in men's minds of those external forces which control their daily life, a reflection in which the terrestrial forces assume the form of supernatural forces. In the beginnings of history it was the forces of nature which were first so reflected, and which in the course of further evolution underwent the most manifold and varied personifications among the various peoples.
"What is Communism? Communism is the doctrine of the conditions of the liberation of the proletariat.
What is the proletariat? The proletariat is that class in society which lives entirely from the sale of its labor and does not draw profit from any kind of capital; whose weal and woe, whose life and death, whose sole existence depends on the demand for labor....
"The history of all hitherto existing society is the history of class struggles.... The modern bourgeois society that has sprouted from the ruins of feudal society has not done away with class antagonisms. It has but established new classes, new conditions of oppression, new forms of struggle in place of the old ones..... All previous historical movements were movements of minorities, or in the interest of minorities. The proletarian movement is the self-conscious, independent movement of the immense majority, in the interest of the immense majority.
Compiled by: T. Borodulina for On Historical Materialism (Marx, Engels, Lenin),
Published: in 1972 by Progress Publishers in the Union of Soviet Socialist Republics.
This revised compilation, with a new organisation, corrections and additions, was created by Brian Baggins in 1999-2000. | <urn:uuid:dc4bbd59-567f-4a41-a805-aec0f8f766c3> | {
"dump": "CC-MAIN-2015-06",
"url": "http://marxists.org/archive/marx/works/subject/hist-mat/index.htm",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115858171.97/warc/CC-MAIN-20150124161058-00058-ip-10-180-212-252.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9562849402427673,
"token_count": 1122,
"score": 2.515625,
"int_score": 3
} |
Dictyostelium discoideum is a slime mold from the order Acrasiales within the phylum Myxomycophyta. What makes this mold very interesting from a scientific point of view is that it represents a junction between single-celled and multi-cellular organisms. A bacterial predator, Dictyostelium discoideum grows vigorously as autonomous cells when bacteria are present as a food source. When the cells have depleted their bacterial food source, they join with adjacent cells to form multi-cellular structures. To survive this period of nutritional starvation, Dictyostelium discoideum may eventually form fruiting bodies containing spores, increasing its chances of survival. The ability to switch between uni-cellular and multi-cellular life forms makes Dictyostelium discoideum an interesting model for cell-cell interactions and development.
The genome of Dictyostelium discoideum is four times the size of that of Saccharomyces cerevisiae, with about 50 Mb of low-GC DNA (roughly 20% GC) distributed across six chromosomes. Functional heterologous proteins are secreted into the medium correctly folded and glycosylated.
As a food source, Dictyostelium discoideum feeds on bacteria; Escherichia coli and Aerobacter aerogenes are typical nutritional sources. The bacterial cells are grown on the nutrient-rich SM medium, and Dictyostelium discoideum feeds on them. The mold cells, feeding and dividing on the bacterial layer, form colonies of growing and dividing cells. As a colony grows, the local bacterial layer becomes depleted.
Subsequently, the individual slime mold amoebae join together to form multi-cellular structures, finally producing fruiting bodies. Within 3 to 4 days on SM medium, Dictyostelium discoideum thus goes from a uni-cellular organism to a multi-cellular life form capable of making spores to survive starvation conditions.
Some specific strains of Dictyostelium discoideum are capable of growing axenically, in a liquid medium without bacteria as food. Two types of media are available for culturing Dictyostelium discoideum cells.
Non-defined complex media are based mainly on peptone and yeast extract. Proteose peptone provides high-molecular-weight peptides and proteins as a nitrogen source; yeast extract supplies vitamins, co-factors and carbohydrates. Both components are often supplemented with additional buffers, glucose and magnesium. HL5 is a good example of a non-defined complex medium routinely used in the lab for culturing Dicty.
"dump": "CC-MAIN-2018-30",
"url": "https://www.formedium.com/product-category/formedium-media/dictyostelium-discoideum/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676594018.55/warc/CC-MAIN-20180722213610-20180722233610-00287.warc.gz",
"language": "en",
"language_score": 0.8773483037948608,
"token_count": 576,
"score": 2.96875,
"int_score": 3
} |
Transliterate Divide – The gap between people who have the skills to understand (read) and create (write) a message (information), to interact using a variety of tools across multiple media and platforms, and to apply those skills to new situations and formats – and those who do not.
My interest in transliteracy is tied to the skills one needs to be transliterate and determining the role of libraries in the acquisition and development of such skills. Although the primary direction of my work and this blog is tied to the internet and digital content, transliteracy is not.
I’ve been reading. A lot. Reading about literacy and all the different types of literacies, technology, the digital divide and anything specifically written about transliteracy. My research and subsequent note-taking on a relatively new term (the practice is in no way new) created a need for definitions. What do we call those who are not transliterate: un-transliterate, non-transliterate, transilliterate? I don’t know. I gave up and moved on, leaving the decision, if any, to individuals wiser than me.
This led to my next issue: since I’m more interested in the skills and the development of those skills, I am interested in the divide between those with the skills and those without them, and in what that divide represents. I need to talk about that divide and have an understanding of my meaning. Based on my knowledge of transliteracy, definitions of transliteracy, the digital divide and the literacy divide,* I worked up the term transliterate divide and a definition. Is it needed? I don’t know. Will anyone other than me use it? I have no idea. Will I use it? All signs point to yes. It is a working definition, not set in stone and certainly open to questions, suggestions and modifications.
- Transliteracy is the ability to read, write and interact across a range of platforms, tools and media from signing and orality through handwriting, print, TV, radio and film, to digital social networks.
- Literacy Divide – the literacy divide of the 20th century distinguished between people who could functionally read and those who could not
- Digital Divide refers to the gap between people with effective access to digital and information technology and those with very limited or no access at all. It includes the imbalances in physical access to technology as well as the imbalances in resources and skills needed to effectively participate as a digital citizen
- Digital Divide – the gap between those individuals and communities that have, and do not have, access to the information technologies that are transforming our lives.
- Digital Divide- the divide between those with access to new technologies and those without
What I was reading
- Transliteracy: Crossing Divides
- beyond Caxton – the post-literate world
- Proust and the Squid: The Story and Science of the Reading Brain
- So Many Digital Divides to Bridge, So Little Time (and Resources and Money and Staff and….)
- What is Transliteracy? Yes, I’m asking again!
*Although the term “literacy divide” is used frequently, I had trouble finding a definition for it.
- Commentary On the Digital Divide from the Chief Executives of Netflix & CommonSenseMedia
- Libraries in a Transliterate, Technology Fluent World #intlib10
- The Definition of Digital Literacy | <urn:uuid:93c8de61-7218-429d-8ba3-d56f588a57ba> | {
"dump": "CC-MAIN-2015-14",
"url": "http://librarianbyday.net/2009/11/23/transliterat-divide-working-definition/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299515.96/warc/CC-MAIN-20150323172139-00037-ip-10-168-14-71.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9451133012771606,
"token_count": 711,
"score": 2.96875,
"int_score": 3
} |
Here's where you can discuss all things Ancient Greek. Use this board to ask questions about grammar, discuss learning strategies, get translation help and more!
Homer, Iliad, Book 9, line 455
μη ποτε γουνασιν οισιν εφεσσεσθαι φιλον υιον εξ εμεθεν γεγαωτα
Is οισιν a dative pronoun or third person singular future of φερω?
οἷσιν (note the rough breathing) is dative plural masculine of the possessive pronoun ἑός/ὅς. "he will bring" would be οἴσει.
"dump": "CC-MAIN-2015-06",
"url": "http://www.textkit.com/greek-latin-forum/viewtopic.php?f=2&t=60376&p=157569",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115856041.43/warc/CC-MAIN-20150124161056-00220-ip-10-180-212-252.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.6917208433151245,
"token_count": 276,
"score": 2.578125,
"int_score": 3
} |
How does All Quiet on the Western Front differ from a traditional coming-of-age novel, which charts the protagonist’s growth as an individual?
Erich Remarque’s All Quiet on the Western Front describes the young German soldier Paul Bäumer’s experiences in World War I, from his training to his death in battle. However, rather than show us how Paul grows as an individual, developing his own ideas and value system, the novel instead shows how Paul—along with his fellow soldiers—survives the war by doing precisely the opposite. The horrors of battle force the soldiers to develop animalistic instincts and a pack-like bond. There is no place for individuals in war, and therefore no place for a traditional coming-of-age tale.
The opening pages of All Quiet on the Western Front emphasize how war dissolves individual men into a single, collective identity. Most fictional autobiographies are narrated in the first-person singular, as the protagonist recounts his or her development from a child into an adult subject. However, Paul begins his tale by speaking not about himself but about his unit, using the third-person plural pronoun “we.” From the beginning, Paul is assimilated into the mass—a mass, moreover, that has been reduced to bodily functions and animal appetites. The third-person plural resonates throughout this first chapter as the soldiers operate as a single unit, motivated by the same communal desires: “we were growing impatient,” “we got excited,” “we were in just the right mood.” The emotions that drive this group arise not from elevated sentiments but rather from the most fundamental animal needs. What unite the soldiers, the reader discovers, are not the head and the heart, but the stomach and the intestines—full bellies and general latrines.
In order to survive the horrors of war, Paul must perform a type of human sacrifice, eradicating his feelings and sensibilities so that all that remains is, as he puts it, a “human animal.” In Chapter Seven, Paul describes how he must distance himself from his emotions and rely solely on automatic, animal instincts. In war, that which makes a person human can cost a soldier his sanity, if not his life. As Paul puts it, emotions—the qualities that make up individual human experience—are “ornamental enough during peacetime.” A soldier must not only discard his immediate emotional reactions to survive, but he must also sever his ties to the past and plans for the future. The war becomes the focal point of his universe, and his identity before or after becomes an irrelevant distraction. The only things that matter on the battlefield are the immediate physical stimuli: blood, hunger, bullets, and pain.
The soldiers are not only animal-like in the way that they reject human emotions and live completely in the present: The violent ways they struggle for power through the exercise of brute force also make them beastly. In explaining how a seemingly subservient postman like Himmelstoss could turn into such a bully as a drill-sergeant, Paul’s friend and fellow soldier, Kat, points out that the army’s power structure brings out the animals hidden within human beings. Human civilization is just a veneer, Kat argues, and humans have more in common with the animal kingdom than they would like to admit. When he participates in viciously swarming the unsuspecting Himmelstoss, Paul himself illustrates Kat’s point by engaging in behavior more appropriate to a savage herd animal than to a rational human individual.
If, as Kat argues, it is the structure of the army that is responsible for bringing out the soldiers’ collective-minded, animal side, then perhaps armistice will enable these men to recapture their individual humanities. Yet for Paul, the prospect of armistice does not seem to promise a return to the human community. Paul imagines that any return to civilized society will be a profoundly alienating experience, one in which “men will not understand” him and in which veterans of his generation will become “superfluous.” His war experience has excluded Paul from the general civilian community, and now the only form of community he can rely on is the animalism of his fellow soldiers. As Paul voices his fear that his generation will fail to “adapt” to the civilized world, his use of Darwinian language draws a final link between the human and animal kingdoms, suggesting that war not only turns the soldier from a human individual into an animal, but that by doing so it ineradicably alters the individual’s ability to relate to other humans. | <urn:uuid:3b290715-1946-4a74-b961-e35ad25b9356> | {
"dump": "CC-MAIN-2019-30",
"url": "https://www.sparknotes.com/lit/allquiet/a-plus-essay/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526153.35/warc/CC-MAIN-20190719074137-20190719100137-00422.warc.gz",
"language": "en",
"language_score": 0.9703489542007446,
"token_count": 968,
"score": 3.03125,
"int_score": 3
} |
Whether we’ve managed to buy our dream home or are simply dreaming of having a home, few things matter to us more than where we live. Our homes can be a large part of our identity. But they’re much more than that. Research demonstrates that decent housing is fundamental to our health.
If home is one of the most important things to both our health and well-being, you would imagine it would be at the top of every political platform. But in Toronto, where we have just completed a provincial campaign, are in the midst of a municipal campaign, and have a federal campaign on the way, housing has not emerged as a campaign theme.
This is a surprising omission in a city where close to 78,000 households are waiting for social housing, with almost 160,000 waiting Ontario-wide (that’s 3 per cent of all households). And these figures do not capture the fact that women list lack of affordable housing as a barrier to leaving violent situations, or that, in Toronto alone, more than 70,000 households are living at risk of homelessness and around 4,000 individuals and families use city shelters on any given night. Nor do they capture the barriers to getting housing such as racism, homophobia and other forms of discrimination.
All of this is taking a huge and well-documented toll on our health. In 2009, researchers followed 1,200 people in Toronto, Ottawa and Vancouver who were homeless or at risk of homelessness. It was found that they experience a high burden of serious health problems like asthma, high blood pressure and chronic obstructive pulmonary disease. They are also at high risk for conditions like depression and anxiety, and of going hungry.
There’s more. We know that housing in disrepair can lead to accidents, fires and infestations. That overcrowding can lead to infections. We also know that, if you develop an illness, it is more difficult to get better if you are homeless or live in a substandard home.
Finally, we know the cost of housing deeply affects our health. When it takes up a large percentage of our income, it can cause profound stress and crowd out things that are important for health like recreational activities, nutritious food and prescription medication.
The housing crisis is affecting health care use, too. Researchers looked at health care records for 1,165 adults using homeless shelters and meal programs in Toronto and found much higher than average rates of emergency department use and hospitalization. For people experiencing homelessness or housing vulnerability in Toronto, Ottawa and Vancouver, researchers found that 55 per cent had visited the emergency department and 25 per cent had been hospitalized at least once in the past year.
Ontarians recognize the connection. A recent study demonstrated that many see a strong link between social factors like housing and health. Forthcoming related research demonstrates that Ontario residents support strong government action on basics like recreation and, yes, housing.
This is consistent with the rest of Canada. Last year a national survey by the Federation of Canadian Municipalities confirmed that nearly three-quarters of Canadians want a national long-term affordable housing plan and more than two-thirds want increased government funding for housing and homelessness.
Ontario has a housing crisis. And a housing crisis is a health crisis. The public knows it, and wants change. Now it’s time for the new provincial government — along with incoming municipal and federal governments — to engage in an urgent and long-overdue conversation about housing and how to reverse these negative trends through good policy and action on affordable housing.
When Ontario’s new provincial government gave its speech from the throne on Thursday, it included a commitment to housing and homelessness funding and programs. Now is a very good time to turn high-level promises into brick-and-mortar realities.
We have international and Canadian research demonstrating the way forward. And if further help is needed, we are here along with many others, ready and able to work to implement solutions. The ingredients are all there. Now it’s time to demonstrate the vision and political leadership to make sure every single one of us has a decent place to call home.
Dr. Stephen Hwang is a practising physician in general internal medicine at St. Michael’s Hospital and a research scientist at the Centre for Research on Inner City Health (@CRICH_StMikes) in Toronto. @StephenHwang.
Dr. Kwame McKenzie is CEO of the Wellesley Institute and a medical director responsible for Dual Diagnosis, Child Youth and Family and Geriatric services and Director Health Equity at CAMH. | <urn:uuid:25eb0d3e-0f78-4bea-9d34-f70dd91820df> | {
"dump": "CC-MAIN-2019-30",
"url": "https://www.thestar.com/opinion/commentary/2014/07/06/ontarios_housing_crisis_is_also_a_health_crisis.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525793.19/warc/CC-MAIN-20190718190635-20190718212635-00508.warc.gz",
"language": "en",
"language_score": 0.9617562294006348,
"token_count": 928,
"score": 2.796875,
"int_score": 3
} |
Etihad Airways is the first airline in the region to operate a single-use plastic free flight.
In a bid to raise awareness of the effects of plastic pollution, single-use plastic free flight EY484 took off from Abu Dhabi and landed at Brisbane Airport on Earth Day, 22 April 2019.
The milestone flight was part of Etihad’s ongoing commitment to the environment: to go beyond Earth Day celebrations and pledge to reduce single-use plastic usage by 80 per cent, not just in-flight but across the entire organisation, by the end of 2022.
What’s the challenge?
No other human activity pushes individual emission levels as fast and as high as air travel. Alongside that stark fact, airlines have been copping flak for years for their seemingly needless over-use of single-use plastics and non-existent recycling practices.
How many times have you cursed under your breath while you’re desperately trying to find the tiniest piece of tray real estate after having torn all the plastic wrapping off every single item on it?
And then thought, what happens to all that plastic anyway?
What’s the solution?
Etihad Airways recently revealed that it uses some 27 million single-use plastic coffee cup lids every year.
That admission was the kick it needed to take action, and on 22 April Etihad became the first major airline to make a long-haul flight with no single-use plastics on board.
To make this new initiative happen, Etihad identified over 95 single-use plastic products that are used across its aircraft cabins.
By removing those items from the Earth Day flight, Etihad says, it prevented over 50 kilograms of plastics from ending up in a landfill.
Guests on board the flight enjoyed replacement products including sustainable amenity kits, award-winning eco-thread blankets made out of recycled plastic bottles, tablet toothpaste and edible coffee cups while children were handed out eco-plush toys.
“There is a growing concern globally about the overuse of plastics, which can take thousands of years to decompose,” said Tony Douglas, the group’s chief executive.
“We discovered we could remove 27 million single-use plastic lids from our in-flight service a year and, as a leading airline, it’s our responsibility to act on this, to challenge industry standards and work with suppliers who provide lower-impact alternatives.”
What will the positive impact be?
As a result of planning the Earth Day flight, Etihad has additionally committed to removing up to 20 per cent of the single-use plastic items on board by 1 June 2019.
By the end of this year, Etihad will have removed 100 tonnes of single-use plastics from its inflight service.
Find out more: www.etihad.com.au
How can you travel to change the world?
Congratulations! By reading this post and taking some of these insights on board, you’ve already made a difference.
Now you can easily create your impact by sharing your new-found knowledge with other friends who you think would also be interested.
Ultimately, responsible travel comes down to common sense – stay curious, keep yourself up-to-date with the challenges at hand and make yourself accountable for your actions on your travels. | <urn:uuid:2d5de671-cc23-4dcc-9576-66e80d7437ec> | {
"dump": "CC-MAIN-2023-40",
"url": "https://traveltochangetheworld.com/the-regions-first-single-use-plastic-free-flight-touches-down-in-brisbane-on-earth-day/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510326.82/warc/CC-MAIN-20230927203115-20230927233115-00399.warc.gz",
"language": "en",
"language_score": 0.9409336447715759,
"token_count": 690,
"score": 2.53125,
"int_score": 3
} |
Up to 75 scientists working in a U.S. government laboratory in Atlanta may have been exposed to live anthrax bacteria, the Centers for Disease Control and Prevention (CDC) said on Thursday.
The CDC says it immediately began treating individuals potentially exposed to the deadly bacteria, and a spokesman said the risk of infection was "very low."
According to Reuters, the exposure happened on June 13 after researchers working in a high-level lab failed to deactivate the anthrax before transferring samples to a lab without the capability to handle live anthrax. Up to seven scientists may have had direct contact with the anthrax, but the CDC offered 75 employees a 60-day antibiotic treatment and an anthrax vaccine.
Dr. Paul Meechan, director of the environmental health and safety complicance office at the CDC, said the agency is investigating the exposure to determine how it happened and whether it was intentional.
Anthrax typically incubates for five to seven days before affecting the infected individual. Anthrax bacteria release spores that go dormant until reaching a host. The spores can survive in the open environment for extended periods of time, even "decades," according to the CDC.
Anthrax gained worldwide attention in October 2001, when an envelope filled with anthrax powder was sent to Senate Majority Leader Tom Daschle at the Hart Senate Office Building. | <urn:uuid:b5e7c8e7-7bcd-4f0f-97bb-aa4e8704a0bf> | {
"dump": "CC-MAIN-2017-04",
"url": "http://www.ibtimes.com/75-government-scientists-exposed-live-anthrax-bacteria-1606694",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00241-ip-10-171-10-70.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.950877845287323,
"token_count": 273,
"score": 2.71875,
"int_score": 3
} |
Life can work in mysterious ways and present us with irony. Showering is something we do daily or weekly to maintain hygiene, but many don’t know that showerheads harbor bacteria that can be alarming to our health.
Hopping in the shower is meant to leave us feeling fresh, clean and invigorated. Most of us spend hours a week scrubbing, buffing and enjoying a hot shower to wake ourselves up in the morning or send us off to sleep at night. But startling new research reveals that the water leaving a showerhead could be riskier and dirtier than we think.
Taking a shower might leave you feeling reinvigorated – but it could be bad for your health. About a third of showerheads harbor potentially dangerous microorganisms waiting to be sprayed over their owners.
Researchers say the slime that builds up inside a showerhead is a breeding ground for bugs linked to a host of illnesses. The slime shields the germs from the chlorine in the water, which is meant to get rid of them.
Professor Norman Pace, who led the study at the University of Colorado at Boulder, said: “If you are getting a face full of water when you first turn your shower on, that means you are probably getting a particularly high load of Mycobacterium avium, which may not be too healthy.”
The scientists examined the sludge lurking inside 45 showerheads from homes and public buildings in nine cities across America.
They identified high levels of M. avium, a relative of the germ that causes TB.
Water spraying from the shower carries the bacteria in droplets that can easily be inhaled deep into the lungs.
Those with weakened immune systems – such as pregnant women, the elderly and people fighting off other illnesses – are more vulnerable, they report in the journal Proceedings of the National Academy of Sciences.
Another germ, Stenotrophomonas maltophilia, also thrives in the black “gunk” that lines showerheads and taps. It kills around 300 Britons a year.
Still, as with anything else, there is a risk associated with it. The researchers found that metal showerheads were home to far fewer pathogens than plastic ones. They said switching to a metal showerhead – particularly one with a filter that can be changed regularly – can help reduce the build-up of bacteria.
Stepping out of the room for a moment after turning the shower on can also reduce the likelihood of inhaling pathogens driven out of the showerhead in the first burst of water.
Microbes lurking in water systems produced one of the most infamous cases of waterborne infection: the deadly outbreak of Legionnaires’ disease in America in 1976.
It takes 2 gallons of water to brush your teeth, 2 to 7 gallons to flush a toilet, and 25 to 75 gallons to take a shower. Showerheads purchased prior to 1992 commonly delivered 5–7 gallons of water per minute. Since 1992, federal guidelines have mandated that manufacturers make showerheads that deliver no more than 2.5 gpm at 80 psi. These are considered low-flow showerheads.
"dump": "CC-MAIN-2018-05",
"url": "https://blog.antaplumbing.com/the-underlying-truth-of-showerheads/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887054.15/warc/CC-MAIN-20180118012249-20180118032249-00504.warc.gz",
"language": "en",
"language_score": 0.9195659160614014,
"token_count": 690,
"score": 2.59375,
"int_score": 3
} |
Qualitative research is important because it generates data that can provide in depth insight into a question or topic. However, in order to draw conclusions from qualitative data, it is essential to quantify the data. “Qualitative researchers may criticize [the] quantification of qualitative data, suggesting that such an inversion sublimates the very qualities that make qualitative data distinctive: narrative layering and textual meaning. But assessment in the university (and the policy implications that flow from it) demands that the data are presented within a scientific construct.” (1) In addition, “until we know more about how and why and to what degree and under what circumstances certain types of qualitative research… can usefully or reliably be quantified, it is unlikely that program planners or policy makers will base decisions on studies generally regarded as ‘qualitative.’” (2)
Therefore, it is important to quantify the data obtained from qualitative research. Quantitative analysis of qualitative data “involves turning the data from words or images into numbers. This can be done by coding ethnographic or other data and looking for emerging patterns.” (3) If qualitative data is in the form of responses to standardized questionnaire surveys, this data may also be quantified. Simple frequencies and relationships between variables can be calculated either manually, or by using qualitative software, such as EZ Text. For example, a researcher studying smoking habits utilized a frequency table to describe the smoking that occurred in specific contexts. The definitions of these "contexts" were derived from interview data generated from in-depth interviews with youth.(4)
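To make this concrete, the sketch below (plain Python with invented codes and responses, not the output of EZ Text or any particular package) shows how coded segments can be tallied into a simple frequency table and a relationship between two variables:

```python
from collections import Counter

# Hypothetical coded data: each record is one interview segment that has
# been tagged with a context code, plus the respondent's group.
segments = [
    {"respondent": "R01", "group": "youth", "code": "smoking_with_friends"},
    {"respondent": "R01", "group": "youth", "code": "smoking_when_stressed"},
    {"respondent": "R02", "group": "youth", "code": "smoking_with_friends"},
    {"respondent": "R03", "group": "adult", "code": "smoking_at_work"},
    {"respondent": "R03", "group": "adult", "code": "smoking_when_stressed"},
]

# Simple frequency table: how often does each context code occur?
code_counts = Counter(seg["code"] for seg in segments)
for code, n in code_counts.most_common():
    print(f"{code}: {n}")

# Relationship between two variables: code frequency broken down by group.
cross_tab = Counter((seg["group"], seg["code"]) for seg in segments)
for (group, code), n in sorted(cross_tab.items()):
    print(f"{group} / {code}: {n}")
```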
There are three main steps to conducting a quantitative analysis of qualitative data: organizing the data, reading and coding it, and presenting and interpreting it.
First, the researcher should organize the data. The data can be organized into groups that relate to particular areas of interest. For example, a study on tobacco farmers might group data into the following sections: history of tobacco farming, other crops grown, role of women in tobacco farming, reasons for tobacco farming and environmental consequences of tobacco farming. (5)
The next step is to read all of the data carefully and construct a category system that allows all of the data to be categorized systematically. The categories should be internally homogeneous and externally heterogeneous. "Everything in one category must hold together in some meaningful way and… the differences between categories need to be bold and clear." (6) If a lot of data doesn't fit into the category system, it usually means that there is a flaw that requires the system to be reorganized. Good qualitative research will have a label for all data, and every attempt should be made to ensure that each segment fits in only one category. Lastly, the classification system should be meaningful and relevant to the study. Once a system has been created for organizing the data, each category should be assigned a number, and then transcriptions of interviews or survey results can be coded. (7)
Included below is an example of a coding system, followed by a coded interview: (8)
1. The smoking period
1.1. When started and stopped
1.2. Reasons for starting smoking and continuing to smoke
2. Circumstances in which smoking takes place
3. Influences on smoking behavior
3.1. Home environment
3.3. Work environment
4. Reasons for stopping
4.2 Cultural or religious
4.4 "Significant other" pressure
4.5 Consideration for others
5. Ways to stop
5.1 Based on experience of respondent
5.2 Based on opinion of the respondent
This is an example of a portion of an interview with the categories assigned to segments of the text. (9)
Interviewer (I): How did you start smoking?
After the data is coded, the data should be displayed and organized so that it can be interpreted. Often simple matrices or charts can be used to compile interview data so that patterns can be determined among respondents. Causal network diagrams and flow charts are also often helpful to assess the cause and effect of relationships that appear in the data. (10) In order to analyze the data, the use of a computer-assisted qualitative data analysis program is suggested. Such programs link code with text in order to perform complex model building and help in data management.(11) For example, EZ-Text is a software program which is useful when working with responses to open-ended questions in a standardized survey. When the same questions are asked in every interview, EZ-Text can be used to quantify the results of an analysis, indicating the frequency of particular responses to each question. This is just one example of a computer program, and there are many other available options that depend on the exact nature of the research and the size of the database. The coding and analysis of data in qualitative research is done differently for each study and depends on the research design, as well as the researcher’s skill and experience. Regardless of the study, it is always essential to clearly document how the data was coded and interpreted, and it is important to quantify it in order to draw conclusions. (12)
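As a rough illustration of this compilation step, the following hypothetical Python sketch (made-up respondents and category numbers, in the style of the smoking coding system above) builds a respondent-by-category matrix so that patterns across respondents are easy to scan:

```python
# Hypothetical coded interviews: respondent -> list of category codes
# assigned to segments of their transcript.
coded_interviews = {
    "R01": ["1.2", "2", "3.1", "4.2"],
    "R02": ["1.1", "2", "3.3"],
    "R03": ["1.2", "3.1", "4.4", "5.1"],
}

# Collect every category that appears, then print one row per respondent
# with a count of how many of their segments fell into each category.
categories = sorted({c for codes in coded_interviews.values() for c in codes})
print("respondent " + " ".join(f"{c:>4}" for c in categories))
for respondent, codes in sorted(coded_interviews.items()):
    row = " ".join(f"{codes.count(c):>4}" for c in categories)
    print(f"{respondent:<10} " + row)
```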
(1) Ward, T. “Quantifying Qualitative Data.” Accessed on 9 December 2010.
(2) Green, E. “Can Qualitative Research Produce Reliable Quantitative Findings?” Field Methods. 13.1 (2001): 3-19. Accessed on 8 December 2010.
(4) “Module 6: Qualitative Data Analysis.” Accessed on 9 December 2010.
(10) “Module 6: Qualitative Data Analysis.” Accessed on 9 December 2010.
(11) Ward, T. “Quantifying Qualitative Data.” Accessed on 9 December 2010.
(12) “Module 6: Qualitative Data Analysis.” Accessed on 9 December 2010. | <urn:uuid:b65cd87e-d035-4ebc-863a-b44bdc4b6560> | {
"dump": "CC-MAIN-2016-07",
"url": "http://www.uniteforsight.org/global-health-university/quantify-research",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701163663.52/warc/CC-MAIN-20160205193923-00187-ip-10-236-182-209.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9189274907112122,
"token_count": 1218,
"score": 3.734375,
"int_score": 4
} |
I have written a blog about identifying and categorizing Spanish apps. As I've been thinking about the present state of modern language/foreign language apps, I've realized that the inadequacies of these apps present great learning opportunities for our students.
Students can work through a vocabulary or phrase app for a modern/foreign language, such as Learn Spanish (Droid) or Hola (Droid):
– Students can analyze what important vocabulary is missing from the topic and make a supplementary list. For example, the housing category may have tableware but not bed or chair.
– If the app only presents individual words, the students can create a meaningful target language sentence or question for each word. For example, for the word “lake”, the students may ask “What is your favorite lake?”
– Students can analyze what important phrases or questions are missing and create those lists. They may look at a “time” category and find that the question “When?” is missing, then make up a question using that question word.
– They can analyze what important topics are missing from the app. Perhaps the app has housing and animals but does not have occupations and city places.
– They can see how many meaningful sentences they can create from the present vocabulary list.
– They can answer any questions given in the app. For example, they can answer “How much does this cost?” with the price of a shirt.
– They can rearrange the questions or statements to create a logical conversation about the topic.
– They can think of a typical language task for a topic, such as dealing with a dirty spoon on the restaurant table, and use the existing sentences plus others they add in order to get a clean spoon.
In this way, students go from consumers to producers. They analyze what they are doing to see what is missing. They think about critical vocabulary, phrases, and topics instead of simply doing a drill program. They do not just repeat; they answer and comment. They build on what the app gives them. The students become language users!
How do your students deal with modern language apps that do not do everything well?
I originally published this blog at my eduwithtechn site
I have developed many Spanish activities that allow students to begin to express themselves and to move toward spontaneous speaking, as in a natural conversation. My Spanish spontaneous speaking activities (20+) include Modified Speed Dating (students ask a question from a card – whole class), Structured Speaking (students substitute in or select words to communicate – pairs), Role Playing (students talk as the people in pictures or drawings of 2-4 people), Speaking Mats (students talk using a wide variety of nouns, verbs and adjectives to express their ideas – pairs or small groups), Spontaneous Speaking (based on visuals or topics – pairs), and grammar speaking games (pairs or small groups). Available for a nominal fee at Teacherspayteachers: http://bit.ly/tpthtuttle
My three formative assessment books: http://is.gd/tbook | <urn:uuid:f21cafaa-42b1-446c-a908-bcb3d99ceae6> | {
"dump": "CC-MAIN-2018-51",
"url": "https://modernlanguagest.wordpress.com/tag/spontaneous-speak/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826856.55/warc/CC-MAIN-20181215105142-20181215131142-00315.warc.gz",
"language": "en",
"language_score": 0.9425790905952454,
"token_count": 641,
"score": 3.484375,
"int_score": 3
} |