I am a Suzuki violin teacher of three years just about to move my program to a local school where I'm really hoping it will take off. I would like to speak up for the Suzuki method because I grew up in both methods: traditional piano and Suzuki violin. Also, I currently teach Suzuki and I do so in a balanced way. The theory of Suzuki is that we teach music using the mother-tongue method. When one learns to speak, one does not learn to read first and then learn to speak. Rather, a child hears his parents talking and copies those sounds. Does this mean we never teach how to read notes? Absolutely not. I introduce reading by the end of book one. By the time my kids are at the end of book three, they are learning all their songs by the written music. They still listen to the recordings, however, because I find that very important.
At what age should you start Suzuki? I would say between 4 and 6 if possible. I would for sure try to get the little ones in a Kindermusik program though! I am also in the process of getting my license to teach that as well. Musical training at a young age does wonders for the ever-developing brain of a child. Kindermusik is also a wonderful bonding time for you and your child, and brings a whole new side of music into the home!
I know that I am biased because I'm a music educator. However, I do believe that music is deeply emotional. Most people love music in one form or another. By playing music with your children, or attending classes with your children, you are greatly enriching their little lives.
There are very few organisms that can survive in a vacuum, and usually fruit fly larvae are not one of them.
But almost by accident, researchers have discovered a way to create a “nanosuit” around the insects that allows them to survive in a vacuum, or space devoid of matter, for more than an hour.
Researchers at the Hamamatsu University School of Medicine in Japan were studying a series of organisms under a scanning electron microscope, a device so powerful that subjects must be viewed in a vacuum because even air molecules will distort the image. Most of the organisms died within seconds of being placed in the microscope’s viewing chamber, as their bodies shriveled and warped in the vacuum — but to the researchers’ surprise, the fruit fly larvae wriggled on as if nothing unusual was happening, and later matured with no adverse effects.
A closer look at this phenomenon revealed that the electron radiation that a scanning electron microscope uses to compile its images was combining with a naturally occurring filmy layer on the larvae’s surface. The result was a polymer, or sturdy chemical structure composed of a sequence of molecules, that protected the larvae from the adverse effects of vacuum without even restricting their movement.
The researchers next looked into recreating nanosuits for insects that don’t naturally have the filmy extracellular material that preserved the fruit fly larva. Now, in a paper published by the Proceedings of the National Academy of Sciences, Dr. Yasuharu Takaku and his team report that by dousing insects in a common emulsifier commercially known as Tween 20 and then bombarding them with plasma, they were able to create an artificial “nanosuit” that protected the insects from the adverse effects of the vacuum for up to 30 minutes.
These findings have a number of implications, from the immediately practical to the wildly speculative. For one, researchers will be able to look at organic life under a scanning electron microscope without it dying from vacuum exposure (though the radiation from the microscope will still eventually kill it). The Hamamatsu University researchers are now looking into the possibility of a nanosuit that will shield organic life from radiation as well as vacuums. The discovery of a biocompatible membrane that can be created in one step is also likely to find quick application in commercial and academic engineering, the researchers note in their report.
The discovery also opens up many theoretical possibilities: Is it possible for small organisms with naturally occurring nanosuits to survive interstellar travel? And will scientists someday be able to fashion thin membranelike suits that allow humans to survive in a vacuum?
“Association of life with a nanosuit now expands our concept of the conditions under which life can survive,” the researchers’ report concludes. “This may be the start of a new era of improved understanding not only of surface biology but of a range of other scientific areas as well.”
© 2012 TechNewsDaily
Catherine Bott picks some of the most important, interesting and entertaining overtures, which have preceded the curtain rising on operas, plays and even movies.
The opera Euridice was written by Jacopo Peri in 1600 for the marriage of King Henry IV of France to Maria de Medici. It is the earliest opera to have survived to the present day. Even that far back, Peri opens with a brief instrumental prologue.
As a musical form, the French overture first appears in the court ballet and operatic overtures of Jean-Baptiste Lully which he elaborated from a similar, two-section form called Ouverture, found in the French ballets de cour as early as 1640. This French overture consists of a slow introduction followed by a lively movement.
The French ouverture style was also used in English opera, most notably in Henry Purcell's 1688 work, Dido and Aeneas. Its distinctive rhythm and function led to the French overture style as found in the works of late Baroque composers such as J.S. Bach.
Handel also unusually used the French overture style in some of his Italian operas, including Giulio Cesare in Egitto – Julius Caesar in Egypt – written in 1724.
Italian overtures, often detached from their operas and played as independent concert pieces, became important to the early history of the symphony. Such was the case for Mozart’s overture to his opera, The Abduction from the Seraglio. Similar to the later Magic Flute overture, this one opens quietly and is then interrupted by loud passages similar to the Turkish military band music later in the opera.
In 19th-century opera the overture, Vorspiel, Einleitung or Introduction became clearly defined as the music which takes place before the curtain rises. Richard Wagner's Vorspiel – or Prelude – to Lohengrin is a short self-contained movement founded on the music of the Grail later in the opera. Photo: Bill Cooper
Although by the end of the 18th century opera overtures were already beginning to be performed as separate items in the concert hall, the "concert overture", intended specifically as an individual piece without reference to stage performance and generally based on some literary theme, began to appear early in the Romantic era. This 1826 overture by Mendelssohn is generally regarded as the first concert overture.
A 20th-century parody of the late 19th century concert overture, scored for an enormous orchestra with organ, additional brass instruments, and obbligato parts for four rifles, three Hoover vacuum cleaners (two uprights in B♭, one horizontal with detachable sucker in C), and an electric floor polisher in E♭; it is dedicated to President Hoover!
Even films get overtures sometimes. The score for Lawrence of Arabia was composed by the then relatively unknown Maurice Jarre in just six weeks, after both William Walton and Malcolm Arnold had proved unavailable. Jarre won his first Oscar for the music, which is now considered one of the greatest movie scores of all time.
About This Project
Saddle-billed Storks are large, charismatic birds that are rare in captivity. The Dallas Zoo was the first zoo to successfully hatch a live chick and was then able to aid other zoos to have similar successes with their birds. Over the past five years, reproductive success has decreased dramatically throughout the SSP population. To rule out possible nutritional factors, this study has been initiated to evaluate the vitamin and mineral blood levels in the captive population.
University of Utah biochemistry professor Janet Iwasa spent months in Hollywood learning to animate so that she and other researchers could visualize molecules in a new way. Now, she spends months building complex molecular models on her computer.
“Why were we all relying on simplistic sketches — representing molecules as circles or squares, for example — when we could be creating three-dimensional and dynamic animations of how we thought things might actually look?” Iwasa asked. “So I started taking animation classes and created my first scientific animations of what my lab studied.”
Scientists know much more about cells than what can be seen in a two-dimensional model. Iwasa was working toward her Ph.D. in cell biology at the University of California, San Francisco, when she first came across the blending of animation and science.
“In cell and molecular biology, many of us are trying to imagine how different cellular processes work at a molecular level,” Iwasa said. “These processes often involve numerous proteins all moving around, interacting with each other, changing shape and sometimes forming larger complexes. It can be hard to wrap your head around things like this when there can be so many moving parts. That’s where animation can come in and help researchers explore how a process might occur.”
Iwasa calls this “visual hypotheses,” or in other words, models of ideas that are based on scientific data and allow researchers to interact with and test the theories optically. This new concept offers a new look into cellular processes and takes scientists a step further in gaining a better understanding of what makes up the world around us.
A great deal of research goes into each of Iwasa’s projects. One of Iwasa’s current projects is working on demonstrating the molecular life cycle of HIV. For this, Iwasa takes into account over 20 years of research into understanding how HIV works at the molecular level so that she can accurately create a detailed animation of what the international science community knows about HIV. The 10-minute film is expected to be released in the spring.
Over the years, Iwasa has gained recognition for her work from numerous media outlets and science organizations, including Foreign Policy, Fast Company, TED and the National Science Foundation. Having published 13 animations, Iwasa is keeping busy with not only developing more, but also teaching up-and-coming scientists.
“I’m co-teaching this class with professor Lien Fan Shen in the Film and Media Arts Department,” Iwasa said. “The course is called Animating Biology, and in it, groups of students — generally coming from either art or biology backgrounds — are working together with a faculty member or a local biotech company to create a scientific animation. It’s been a lot of fun, and we’re hoping to continue this course in the future to train a new generation of scientific animators.”
Iwasa hopes scientific animations continue to gain traction, as she feels they have the potential to break down barriers between the general population and the scientific community.
“Molecular animations can give people a fascinating window into what a scientist thinks without letting things like jargon get in the way,” Iwasa said.
Most people who walk along the waterside below the road and rail bridges will not notice a plaque on the roadside wall which proclaims:
BOROUGH OF SALTASH:
This plaque commemorates the closing
of the ferry across the River Tamar
at Saltash on the 23rd day of
October 1961, after more than
700 years of service.
For the greater part of that time the Saltash ferry was the most important in the West of England and the Borough of Saltash derived a good proportion of its income from it. Certainly the £35,000 compensation paid to the Borough by the Tamar Bridge Joint Authorities was less than generous.
Saltash Passage is documented as early as the 13th century, and Douglas C. Vosper in his ‘The Ancient Ferry at Saltash’ declares there is evidence that it was an important crossing at the time of the Norman Conquest. The rights of ferry at Saltash belonged to the Valletort family from the years following the Norman Conquest until 1270, when Roger de Valletort sold Trematon Castle and estate to Richard, Earl of Cornwall. Thereafter, the Earl's bailiff received the rents which fell due to the manor for Saltash Passage; in the 1290s they amounted to £6 18s. a year.
View across the river towards Saltash with St Nicholas and St Faith church on top of the hill in pre steam ferry days
The ferry (originally known as a steam bridge) was ready for service by the end of 1832, and according to the newspaper "Western Luminary", did its trial run in 4½ minutes. The improved trade must have been short-lived, because the same ferry system was introduced at Torpoint in 1834, with a second for reserve in 1836.
To accommodate the new Saltash chain-ferry, which had an overall length of 80 feet with the engine midships and vehicle decks on either side, it was necessary to move the landing place on the Devon side down river, so that the floating bridge could approach the Saltash beach at a better angle. The relative positions of Ashtor Rock and the Passage House Inn made a fixed approach on the old line impossible. Consequently an embankment was made southwards along the beach to a point near Little Ash Quarry (later the Tea Gardens), where there was a suitable hard beach. Wolseley Road did not exist in 1831. It and New Road, Saltash, were authorised by a Turnpike Act of 1833, and land for New Road was purchased in 1834.
| Ferry | Date in service | Date out of service | Builder | Cost & notes | Fate of ferry |
|---|---|---|---|---|---|
| 1 | | | James M. Rendel | In service less than 3½ years; court case to regain ferry rights | Broke down, beyond repair; oar-propelled horse boat reintroduced |
| 2 | | | Ratcliffe of Mountbatten (wood) | New 25-year lease, £195/year | Sank on slipway 1865 |
| 3 | | | Plymouth Foundry and Iron Works (steel hull, wood superstructure) | Engines from ferry 2 salvaged and reused on this ferry | Sold for scrap |
| 4 | | | Willoughby's of Plymouth (steel) | Cost £2,200; major repairs 1896; horse boat in service | Sold for £75 to Vick Bros. of Plymouth |
| 5 | | | Willoughby's of Plymouth (steel) | 1913: horse boat sold for £2.10s | Sank while under tow to a scrap-yard |
| 6 | | | Philips of Kingswear, Dartmouth | Cut in half and widened to take four lanes of cars | Sank on way to breaker's yard in Ireland |
| 7 | | | Thorncroft of Southampton | Sold to King Harry Ferry; later converted from steam to diesel-electric | Sank in 1974 while being towed to Spain |
Ferry 3 looked very similar to the previous ferry. It was powered by the salvaged engines from Ferry 2 and was the last wooden ferry; it cost £1,300.
Ferry 4’s arrival was greeted with great euphoria by the people of Saltash, but this was short-lived when the ferry broke down after only a few weeks. Luckily the old ferry was still available; although it had been sold for scrap, it was brought back into service.
In 1911 a new larger ferry (ferry 5) designed by Mr. Tobias Bickle and again built by Willoughby's of Plymouth at a cost of £3,500 entered service. She carried two rows of four vehicles each, with their horses, and had a top deck on each side for foot passengers. To accommodate the new vessel, the chain-pipes and moorings were widened to 33½ feet, with wells containing three ton weights to act as springs.
On 15th September 1927 a new ferry (ferry 6), larger than the 1911 boat, was purchased from Philip and Sons of Kingswear, Dartmouth.
Built in 1933, Ferry 7 suffered a major problem with her boiler and one of the driving chain wheels. Spare parts were hard to come by, so the ferry spent over a year on the stocks at Saltash Passage.
Ferry 2 from a drawing in Saltash Guildhall
The recommended schedule of vaccines for children is safe and has done much to dramatically lower the incidence of devastating illnesses, according to a new national scientific study that was partly led by a Northwestern University professor.
"Vaccines are among the most effective and safe public health interventions to prevent serious disease and death. Because of the success of vaccines, most Americans have no firsthand experience with such devastating illnesses as polio or diphtheria," according to the Institute of Medicine's report titled, "The Childhood Immunization Schedule and Safety: Stakeholder Concerns, Scientific Evidence, and Future Studies." The Institute of Medicine is an independent, nonprofit group that is the health arm of the National Academy of Science.
The report, which was released this month, comes as some parents and health activists have said that the vaccinations could cause health problems in children.
Dr. Paul Greenberger, professor in allergy-immunology at the Northwestern University Feinberg School of Medicine and one of the paper's authors, acknowledged these concerns.
"Vaccine safety is on a lot of people's minds all of the time, and identifying safety issues," Greenberger said.
But he and his fellow researchers found no cause for alarm about the schedule of vaccinations that pediatricians recommend for children. The researchers examined data on the vaccinations' safety record produced by the federal Centers for Disease Control and Prevention and the Food and Drug Administration.
"We could not find evidence that the complete schedule is unsafe," Greenberger said. "We looked at chronic conditions, and found no evidence for a relationship between them and the complete composite schedule."
These chronic conditions include allergies, lupus, asthma and autism, he said. "It was very reassuring and should be very reassuring" for the public.
By the time that they start kindergarten, about 90 percent of children in the United States receive most of the age-appropriate vaccines suggested by the federal immunization schedule, according to the report.
That schedule, which the U.S. Advisory Committee on Immunization Practices prepared, can include one to five injections in a pediatric visit, with a total of 24 immunizations given by age 2.
Greenberger said researchers should continue to examine databases about vaccinations and children's health to get a more detailed picture of the safety record.
Greenberger and the study's co-authors recommend that the federal government do more to support the Vaccine Safety Datalink project, which was created in 1990 to monitor immunization safety and address research gaps in knowledge about serious and rare events which occur post-immunization.
The project, which is a collaborative effort between the Immunization Safety Office for the Centers for Disease Control and Prevention and nine managed care organizations, has compiled a large database about vaccinations and their medical outcomes.
"We recommended that the government fund and support the project. We also recommended that the (U.S.) Department of Health and Human Services consider expanding the Vaccine Safety Datalink partnerships," Greenberger said. "We thought the data could be useful for collecting data for additional studies and for ongoing efforts to learn about the schedule's safety."
Dr. Gary Freed, chief of the division of general pediatrics and director of the Child Health Evaluation and Research Unit for the University of Michigan Health System, said the Institute of Medicine's report was "thorough and well-done, and reached rational conclusions."
"I think the issue of vaccine safety is complex and has a long history of misinformation and unsubstantiated information, which has been presented by many different sources," Freed said. "It has been a danger that people from celebrities to pseudo-scientists disseminate information that's just plain wrong. As a result, some parents have not immunized their children, because they have been scared from incorrect information about vaccine safety."

Copyright © 2015, Sun Sentinel
Welcome to the Indiana State Society
National Society United States Daughters of 1812
The National Society, United States Daughters of 1812, has established and maintains at 1461 Rhode Island Avenue, NW, in Washington, DC, the only museum dedicated to the historical period of 1784 to 1815, a relatively unknown era of our history. From a former mast of the U.S.S. Constitution, which now serves as our flag pole, to powder horns, swords, scrimshaw, diaries and portraits, our museum reflects this exciting period. In 1928 the National Society purchased a three-story brick building for its headquarters. This Queen Anne-style building, built in 1884 by Rear Admiral John H. Upshur (1823-1917), has been lovingly maintained and today is in excellent condition. For over seventy-five years, members and friends of the society have been adding to the collection of period furniture, books and artifacts.
Originally part of the Indiana Territory, Indiana did not become a state until December 11, 1816. Although Indiana was still a Midwestern territory during the War of 1812, there are many historic locations within the state relating to the war, as well as numerous battlefields, monuments, events and famous people.
The Treaty of Fort Wayne, 1809
This treaty set the stage for “Tecumseh’s War.”
The Battle of Tippecanoe, November 7, 1811
Part of Tecumseh’s War. Although fought before the War of 1812, it was a major battle leading up to that conflict.
The Battle of Fort Harrison, September 4 – 5, 1812
A decisive victory for the United States, considered its first land victory of the War of 1812.
The Siege of Fort Wayne, September 4 – 12, 1812
Indians from the Potawatomi and Miami tribes, led by Chief Winimac, undertook a campaign against Fort Wayne.
The Battle of the Mississinewa, December 17 – 18, 1812
An expedition ordered by William Henry Harrison against the Miami Indian villages in response to the attacks on Fort Wayne and Fort Harrison in the Indiana Territory.
William Henry Harrison: The first Governor of the Indiana Territory and the 9th President of the United States.
Tecumseh: Shawnee leader
Tenskwatawa: Tecumseh’s brother
Jonathan Jennings: The first Governor of Indiana, serving from 1816 to 1822.
Minn. Scientist Waiting To Inhale Air From Mars
MINNEAPOLIS (WCCO) — How’s the air on Mars? A Minnesota scientist hopes to find out, soon.
Heidi Manning, a physics professor at Concordia College in Moorhead, helped develop an instrument aboard the Mars rover that will take samples and identify chemicals on the red planet.
She’s hoping the device will be activated in the next few days.
“We’re going to get our first breath of martian air, and be able to analyze that and really precisely know the composition and the chemical makeup of that air that surrounds Mars,” she said.
Manning is on sabbatical and plans to spend the next year analyzing the info sent back from Curiosity.
Peer pressure says literacy isn't 'cool'
Following Ofsted's statement that it was a “moral imperative” for schools to improve reading and writing, new research by the Reader's Digest has found that peer pressure and parents' lack of interest are significant causes of falling literacy.
The Reader's Digest survey on literacy levels interviewed both adults and children and found that one in three (33%) children only "occasionally" read books, nearly one in five (18%) "hardly ever" read books and 5% never read a book. Nine out of ten (91%) parents questioned were concerned about the country's declining literacy levels and over a third (36%) blamed themselves for the worrying trend.
Results from the survey showed that:
- Nearly half (46%) of all children say they would benefit from their parents spending more time reading and writing with them
- 94% of parents felt that more should be done by society to encourage children to read books and write creatively in their leisure time to help improve their literacy levels
- 51% of parents have difficulty getting their children to read and write in leisure time
- One in ten admit they leave it up to their children to read and write stories in spare time
- Over half of parents (53%) strongly agree that there's a clear value in encouraging children to read and write more: it improves grammar and spelling (28%), increases creativity and imagination (25%) and gives children more confidence in their own abilities (19%)
Gill Hudson, editor of Reader's Digest, said: "This is the first time we have seen how children look at things. Their peer group is far more important than what their parents think."
Economic theory is broadly applicable. However, a society’s property-rights structure influences how the theory will manifest itself. It’s the same with the theory of gravity. While it, too, is broadly applicable, attaching a parachute to a falling object affects how the law of gravity manifests itself. The parachute doesn’t nullify the law of gravity. Likewise, the property-rights structure doesn’t nullify the laws of demand and supply.
Property rights refer to who has exclusive authority to determine how a resource is used. Property rights are said to be communal when government owns and determines the use of a resource. Property rights are private when it’s an individual who owns and has the exclusive right to determine the non-prohibited uses of a resource and receive the benefit therefrom. Additionally, private-property rights confer upon the owner the right to keep, acquire, and sell the property to others on mutually agreeable terms.
Property rights might be well defined or ill defined. They might be cheaply enforceable or costly to enforce. These and other factors play a significant role in the outcomes we observe. Let’s look at a few of them.
A homeowner has a greater stake in the house’s future value than a renter. Even though he won’t be around 50 or 100 years from now, the house’s future housing services figure into its current selling price. Thus, homeowners tend to have a greater concern for the care and maintenance of a house than a renter. One of the ways homeowners get renters to share some of the interests of owners is to require security deposits.
Here’s a property-rights test question. Which economic entity is more likely to pay greater attention to wishes of its clientele and seek the most efficient methods of production? Is it an entity whose decision-makers are allowed to keep for themselves the monetary gain from pleasing clientele and seeking efficient production methods, or is it entities whose decision-makers have no claim on those monetary rewards? If you said it is the former, a for-profit entity, go to the head of the class.
While there are systemic differences between for-profit and non-profit entities, decision-makers in both try to maximize returns. A decision-maker for a non-profit will more likely seek in-kind gains such as plush carpets, leisurely work hours, long vacations, and clientele favoritism. Why? Unlike his for-profit counterpart, he doesn’t have property rights to take his gains. Also, since he can’t capture for himself the gains and doesn’t himself suffer the losses, there’s reduced pressure to please clientele and seek least-cost production methods.
You say, “Professor Williams, for-profit entities sometimes have plush carpets, have juicy expense accounts, and behave in ways not unlike non-profits.” You’re right, and again, it’s a property-rights issue. Taxes change the property-rights structure of earnings. If there’s a tax on profits, then taking profits in a money form becomes more costly. It becomes relatively less costly to take some of the gains in non-money forms.
It’s not just businessmen who behave this way. Say you’re on a business trip. Under which scenario would you more likely stay at a $50-a-night hotel and eat at Burger King? The first is where your employer gives you $1,000 and tells you to keep what’s left over. The second is where he tells you to turn in an itemized list of your expenses and he’ll reimburse you. In the first case, you capture for yourself the gains from finding the cheapest way of conducting the trip, and in the second, you don’t.
These examples are merely the tip of the effect that property-rights structure has on resource allocation. It’s one of the most important topics in the relatively new discipline of law and economics.
Copyright 2005 Creators Syndicate, Inc. (www.creators.com). Reprinted by permission.
This article originally appeared in the October 2005 edition of Freedom Daily. | <urn:uuid:28cf8f86-2139-459d-9ebf-6e451c70c51b> | CC-MAIN-2017-43 | https://www.fff.org/explore-freedom/article/economics-citizen-part-8/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187821017.9/warc/CC-MAIN-20171017091309-20171017111309-00579.warc.gz | en | 0.946234 | 870 | 3 | 3 |
“Water is the driving force of all nature;” so said Leonardo da Vinci. The survival of our planet depends on the availability of water and the condition that it is in. It is therefore vital to make sure that it is able to provide a hospitable environment for those that live in it, and is free from harmful levels of substances for those that drink it.
Not monitoring water quality levels can have detrimental impacts on ecosystems and qualities of life. This can range for disrupting or damaging aquatic environments, to causing illness or even loss of human life in the most severe cases.
Today we are going to look at some of the most important indicators of water quality, as well as assessing where irregular levels of these can cause harm.
pH measures the concentration of hydrogen ions in water on a scale of 0 to 14 where 7 is neutral and anything above 7 is alkaline and anything below 7 is acidic. The pH of most natural water is between 6 and 8.5. Levels below 4.5 and above 9.5 are usually lethal to aquatic organisms. pH affects the solubility of organic compounds, metals, and salts. In highly acidic waters, certain minerals can dissolve and release metals and other chemical substances into the water. As pH or temperature rises, so too does the toxicity to aquatic organisms.
Levels of dissolved oxygen
Dissolved oxygen is essential for a healthy aquatic ecosystem. Fish and aquatic animals need the oxygen dissolved in the water to survive. As water temperature increases, the amount of oxygen that dissolves in water decreases. Ice-cold water can hold twice as much dissolved oxygen as warm water. The need for oxygen depends on the species and life stage; some organisms are adapted to lower oxygen conditions, while others require higher concentrations. Drastic alterations in the usual levels in dissolved oxygen can cause great amounts of damage to environmental ecosystems. This can be caused by a number of environmental impactors such as, streamflow runoff and changes in temperature.
Levels of total dissolved solids
The concentration of total dissolved solids (TDS) is a measure of the amount of dissolved material in water. TDS includes solutes such as sodium, calcium, magnesium, bicarbonate, and chloride that remain as a solid residue after the evaporation of water from the sample. Natural weathering, mining, industrial waste, sewage, and agriculture are some of the main sources of TDS. High levels of total dissolved solids make water less suitable for drinking and irrigation.
Levels of metals
A number of metals, such as copper, manganese, and zinc are essential to biochemical processes that sustain life. However, high concentrations of these and other metals in water can be toxic to animals and humans if they are ingested, or if they are found in animals that are then consumed by humans. Dissolved metals are generally more toxic than metals bound in complexes with other molecules. Metals can appear in water both naturally through weathering of rocks and soils, and unnaturally through industrial waste.
Levels of chemicals
Industrial waste can also introduce chemicals into water. Some industrial chemicals like polychlorinated biphenyls (PCBs) present a threat to aquatic ecosystems. PCBs cause a variety of serious health effects on the immune, reproductive, nervous, and endocrine systems. It is for this reason that PCBs were banned in the UK in the 1980s, previous to this they were used in a variety of manufacturing processes, particularly for electrical parts. Their high stability means they resist biological, chemical, and thermal degradation in the environment. As they are difficult to break down, high levels can still be found in waterways today.
Constant monitoring of all these indicators is needed to make sure that water remains safe to all those that use it. Here at Aquaread we offer a wide variety of water monitoring equipment for both portable and installed use, that has been tested to the highest standards by our skilled team of engineers, designers and scientists. | <urn:uuid:620cb059-495d-4aa0-b0cf-b7ac116c080c> | CC-MAIN-2017-51 | https://www.environmental-expert.com/articles/what-are-the-main-indicators-of-water-quality-715818 | s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948521292.23/warc/CC-MAIN-20171213045921-20171213065921-00001.warc.gz | en | 0.954263 | 809 | 3.78125 | 4 |
|Marine organisms face a variety of challenges in their
quest for life and reproduction. They must obtain food for growth and survival,
avoid being food for other organisms, cope with the physical environment,
and have an effective strategy for bringing forth reproducing offspring.
First we will look at Foraging Ecology, the quest for food. Obtaining food is necessary for an organisms survival, growth and offspring reproduction. Without food, almost nothing else about an organisms relation to the world will matter for long.
I. Obtaining food, necessary for: -survival -growth -offspring reproduction
There are many different strategies for obtaining the
necessary food. Organisms generally follow two basic strategies, being
either a generalist or a specialist. A specialist will prefer only a single
or few food types and a generalist will gladly eat many different food
types. Both of the strategies have their advantages and disadvantages.
A specialist may concentrate on a very nutritious food type; when it finds
it favored food, it will be able to obtain most of its nutritional requirements,
but it may have a hard time finding its favored food. A generalist may
be surrounded by many different edible food, but like spinach, it may not
be that good for the organisms, or it may take a long time to process the
food, such as eating barnacles with their hard shells.
Preference: Indication of foods chosen by a predator
given equal access to choices Average population preference study in food
choices of a sea urchin algal abundance avoidance preference Field Diet
**Equal Access to different algae, the sea urchin has preferred and avoided
Food preference in a gastropod with varying food abundances. Nucella % mussels eaten NO SWITCHING % mussels offered Acanthina barnacles food preference mussels SWITCHING relative food abundance *Strong food preferences may prevent food switching regardless of abundance.
Problem, under a given set of circumstances, how should a predator forage? How should a predator respond to variation in environment, such as patchiness of food supply. In such a case, what path should the predator follow.
To solve this problem, ecologists have come up with a model known as the Optimal Foraging Theory, in this model the predator will completely avoid the unprofitable, and completely pursue the profitable.
The model has four assumptions: 1. Foraging behavior is variable, and heritable. 2. Possible responses to prey are constrained 3. Most efficient foragers will be favored by natural selection 4. Efficiency determined by maximizing energy gained in a set amount of time
Two types of consumers are defined by their Energy/Time ratio (E/T).
2. Time Minimizer: Fixed energy goal -minimize time needed to obtain particular energy amount
The foragers face their first dilemma when they must decide
on the quality of the food item it will eat.
Optimal prey is the one where searching time and handling time is minimized
**assume energy is the same, cost is time consumed
Energy Gained VS Time Handling for five prey species E A D
Energy B Gained C
Time required to obtain and handle A--Best prey, most energy gained per unit of timeE--Worst prey, least energy gained per unit of time
Predictions from Model 1. Highest rank prey should always be eaten. 2. Lower rank prey should be pursued and eaten only if this increases net energy gain. Gains>Costs 3. Exception to 2, -Take lower rank prey if recognition time is low low rank prey may be eaten if frequently encountered. 4. Predators should be more selective when prey are abundant and less selective when prey are scarce. Search Time VS Handling Time of Scarce and Abundant Prey scarce prey handling time Time abundant prey A B CPrey Species D E Optimal shifts to the left when a prey species become abundant**assume energy is the same, cost is time consumed
Energy Gained VS Time Obtaining + Handling 5 prey species E D A Energy C Gained B Time required to obtain and handle prey Optimal prey shifts to the right with increasing abundance 5. Inclusion of lower ranked prey is independent of its own abundance and dependant on high ranked prey abundance.
Crab Caranus mearas eating mussel mytilus edulus Optimal size prey?energy obtained/ time spent handlingpreyUnlimited food#eaten/crab/dayLimited Food#eaten/crab/day 1 2 3 4mussel size 5 6cm 1 2 3 4mussel size 5 6cm 1 2 3 4mussel size 5 6cm
** This study shows that the snail will eat the best prey first, and save the lousy prey for last. Inclusion of the lower ranked prey is independent of its abundance and dependant on the abundance of the highest ranked prey.
The foragers second dilemma: How long should a forager stay within a patch? If it stays too long it will be spending too much time for the amount of energy it is accumulating.
Rate of energy extraction Time
Cumulative rate of energy extracted energy extracted predicted time spent in patch t t t t p o TIME assumption- patches all same size, equally spaced
Patch Quality Cumulative average energy high extracted low TIME
**A predator should leave quicker in a low quality patch than in a higher quality patch
Patch Quality Cumulative energy extracted t t t1 t2
**Time spent in a patch increases if travel time is long.
Reproduction is one of the primary goals of any organisms. In order to keep a species from becoming extinct, its members must reproduce at least enough offspring to replace themselves. Organisms invest a considerable amount of time and energy into reproducing. The eggs and sperm they produce are costly cells to make, and their only use is for reproduction. Some organisms also invest a great deal of energy in mate selection and nest preparation. In the case of the higher organisms, parental care may take place, and the organisms will be investing its resources into its offspring for even years to come. If an organism did not have to invest all of this energy into producing and nurturing offspring, they would have more energy available for investing into more complex organs allowing them to possibly be better competitors in its environment. But reproduction is an absolute necessity for organisms that are not immortal, and I do not know any that are. Without reproduction, the species would be gone after just one generation. Reproduction is not necessarily the most important thing to a particular creature, it may be happy to just spend its time swimming and eating, but it is the primary goal of the genes within the creature. The genes (the instructions for creating an organism) are the only level that evolution takes place. Good instructions for creating an organism that is good at reproducing get passed on to further generations. If the instructions were poor, the organism will not reproduce, and the instructions will be lost. Only the good instructions get passed on. Natural selection is very much at work at filtering out successful instructional strategies. One could say that an organism is only the means to continue the flow of genes. That is what natural selection has programmed the genes to do; to creature organisms that deal with the environment well and reproduce. 
A gene that programmed its organism to be poor at dealing with the environment would probably not be passed on to the next generation. Only those genes that create organisms that are successful in the environment get passed on. This filtering process (natural selection) soon leaves only genes that are good at producing fit organisms. If the environment stayed constant, then evolution would have finished millions of years ago, but the environment is not constant. The genes mutate every once in a few million reproductions. Most of these mutations will be deleterious to the organisms that is created, but some may actually benefit the organisms. Natural selection will work at filtering out the good from the bad mutations. If the mutation is bad, the gene will probably not be passed on (such as a mutation that hindered an organisms ability to eat, the organisms starves to death and does not reproduce). A few of the mutations may actually help the created organisms (such as a gene that allowed for better sight, the organisms may be able to see food that others were missing, or to avoid predation, giving the organisms an advantage). A good mutation would quickly be passed on throughout the species population, and this is evolution! Evolution selects for the best adaptive traits to insure that each female replaces herself within her lifetime, ideally, she should replace herself and then add some.
Evolution selects for the best adaptive traits to insure that each female replaces herself within her lifetime, ideally, she should replace herself and then add some.
Problems in the evolution of life history.
2. How many times should an individual reproduce?
3. How many eggs should there be per clutch?
4. How large should the eggs be?
5. When in the year should reproduction occur?
6. How to locate a mate?
7. How can young locate an appropriate habitat?
Factors influencing the production of maximum number of reproducing offspring.
I. Biology of Individual
1. Planktotrophy: very small and numerous eggs with little yolk. Eggs are of low cost to make, so many can be made. The larvae must feed in plankton column after hatching.
2. Lecithotrophy: relatively large, few, yolky,
and costly eggs. Some nursed. Larvae are non-feeding, simple in form. Found
in plankton, demersal, or benthic environments.
Marine Biology resources by Odyssey Expeditions Tropical Marine Biology Voyages | <urn:uuid:d6747288-3f8d-403b-81bb-8856dde93673> | CC-MAIN-2016-50 | http://www.marinebiology.org/marineecology.htm | s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541864.44/warc/CC-MAIN-20161202170901-00113-ip-10-31-129-80.ec2.internal.warc.gz | en | 0.92238 | 1,993 | 4.125 | 4 |
In computer science, a calling convention is an implementation-level (low-level) scheme for how subroutines receive parameters from their caller and how they return a result. Differences in various implementations include where parameters, return values, return addresses and scope links are placed (registers, stack or memory etc.), and how the tasks of preparing for a function call and restoring the environment afterward are divided between the caller and the callee.
Calling conventions may be related to a particular programming language's evaluation strategy but most often are not considered part of it (or vice versa), as the evaluation strategy is usually defined on a higher abstraction level and seen as a part of the language rather than as a low-level implementation detail of a particular language's compiler.
Calling conventions may differ in:
- Where parameters, return values and return addresses are placed (in registers, on the call stack, a mix of both, or in other memory structures)
- The order in which actual arguments for formal parameters are passed (or the parts of a large or complex argument)
- How a (possibly long or complex) return value is delivered from the callee back to the caller (on the stack, in a register, or within the heap)
- How the task of setting up for and cleaning up after a function call is divided between the caller and the callee
- Whether and how metadata describing the arguments is passed
- Where the previous value of the frame pointer is stored, which is used to restore the frame pointer when the routine ends (in the stack frame, or in some register)
- Where any static scope links for the routine's non-local data access are placed (typically at one or more positions in the stack frame, but sometimes in a general register, or, for some architectures, in special-purpose registers)
- How local variables are allocated can sometimes also be part of the calling convention (when the caller allocates for the callee)
In some cases, differences also include the following:
- Conventions on which registers may be directly used by the callee, without being preserved (otherwise regarded as an ABI detail)
- Which registers are considered to be volatile and, if volatile, need not be restored by the callee (often regarded as an ABI detail)
Although some languages may specify parts of the calling sequence in the language specification (or in a pivotal implementation), different implementations of such languages (that is, different compilers) may still use a variety of calling conventions, often selectable. Reasons for this include performance, frequent adaptation to the conventions of other popular languages (with or without technical justification), and restrictions or conventions imposed by various platforms (combinations of CPU architecture and operating system).
A CPU architecture typically admits more than one possible calling convention. With many general-purpose registers and other features, the potential number of calling conventions is large, although some architectures are formally specified to use only a single convention, supplied by the architect.
The x86 architecture is used with many different calling conventions. Due to the small number of architectural registers, the x86 calling conventions mostly pass arguments on the stack, while the return value (or a pointer to it) is passed in a register. Some conventions use registers for the first few parameters, which may improve performance for short, simple and frequently invoked leaf routines (i.e. routines that do not call other routines and do not have to be reentrant).
Typical caller structure:

    push EAX            ; pass some register result
    push byte [EBP+20]  ; pass some memory variable (FASM/TASM syntax)
    push 3              ; pass some constant
    call calc           ; the returned result is now in EAX
Typical callee structure: (some or all (except ret) of the instructions below may be optimized away in simple procedures)
calc:
    push EBP            ; save old frame pointer
    mov  EBP,ESP        ; get new frame pointer
    sub  ESP,localsize  ; reserve stack space for locals
    .
    .                   ; perform calculations, leave result in EAX
    .
    mov  ESP,EBP        ; free space for locals
    pop  EBP            ; restore old frame pointer
    ret  paramsize      ; free parameter space and return
The standard 32-bit ARM calling convention allocates the 15 general-purpose registers as:
- r14 is the link register. (The BL instruction, used in a subroutine call, stores the return address in this register.)
- r13 is the stack pointer. (The Push/Pop instructions in "Thumb" operating mode use this register only.)
- r12 is the Intra-Procedure-call scratch register.
- r4 to r11: used to hold local variables.
- r0 to r3: used to hold argument values passed to a subroutine, and also hold results returned from a subroutine.
The 16th register, r15, is the program counter.
If the type of value returned is too large to fit in r0 to r3, or whose size cannot be determined statically at compile time, then the caller must allocate space for that value at run time, and pass a pointer to that space in r0.
Subroutines must preserve the contents of r4 to r11 and the stack pointer (perhaps by saving them to the stack in the function prologue, then using them as scratch space, then restoring them from the stack in the function epilogue). In particular, subroutines that call other subroutines must save the return address in the link register r14 to the stack before calling those other subroutines. However, such subroutines do not need to return that value to r14—they merely need to load that value into r15, the program counter, to return.
The ARM calling convention mandates using a full-descending stack.
This calling convention causes a "typical" ARM subroutine to:
- in the prologue, push r4 to r11 to the stack, and push the return address in r14 to the stack (this can be done with a single STM instruction);
- copy any passed arguments (in r0 to r3) to the local scratch registers (r4 to r11);
- allocate other local variables to the remaining local scratch registers (r4 to r11);
- do calculations and call other subroutines as necessary using BL, assuming r0 to r3, r12 and r14 will not be preserved;
- put the result in r0;
- in the epilogue, pull r4 to r11 from the stack, and pull the return address to the program counter r15. (This can be done with a single LDM instruction.)
The 64-bit ARM (AArch64) calling convention allocates the general-purpose registers as follows:

- x30 is the link register (used to return from subroutines)
- x29 is the frame register
- x19 to x29 are callee-saved
- x18 is the 'platform register', used for some operating-system-specific special purpose, or an additional caller-saved register
- x16 and x17 are the Intra-Procedure-call scratch register
- x9 to x15: used to hold local variables (caller saved)
- x8: used to hold indirect return value address
- x0 to x7: used to hold argument values passed to a subroutine, and also hold results returned from a subroutine
The 32nd register, which serves as a stack pointer or as a zero register depending on the context, is referenced either as sp or xzr.
All registers starting with x have a corresponding 32-bit register prefixed with w. Thus, a 32-bit x0 is called w0.
The PowerPC architecture has a large number of registers, so most functions can pass all arguments in registers for single-level calls. Additional arguments are passed on the stack, and space for register-based arguments is also always allocated on the stack as a convenience to the called function in case multi-level calls are used (recursive or otherwise) and the registers must be saved. This is also of use in variadic functions, such as printf(), where the function's arguments need to be accessed as an array. A single calling convention is used for all procedural languages.
The most commonly used calling convention for 32-bit MIPS is the O32 ABI, which passes the first four arguments to a function in the registers $a0–$a3; subsequent arguments are passed on the stack. Space on the stack is reserved for $a0–$a3 in case the callee needs to save its arguments, but the registers are not stored there by the caller. The return value is stored in register $v0; a second return value may be stored in $v1. The 64-bit N64 ABI, and the N32 ABI, allow more arguments to be passed in registers, making function calls with more than four parameters more efficient. The return address of a function call is stored in the $ra register automatically by the JAL (jump and link) or JALR (jump and link register) instructions.
The N32 and N64 ABIs pass the first eight arguments to a function in the registers $a0–$a7; subsequent arguments are passed on the stack. The return value (or a pointer to it) is stored in register $v0; a second return value may be stored in $v1. In both the N32 and N64 ABIs, all registers are considered to be 64 bits wide.
On both O32 and N32/N64 the stack grows downwards, but the N32/N64 ABIs require 64-bit alignment for all stack entries. The frame pointer ($30) is optional and in practice rarely used, except when the stack allocation in a function is determined at runtime, for example by calling alloca().

For N32 and N64, the return address is typically stored 8 bytes before the stack pointer, although this may be optional.
For the N32 and N64 ABIs, a function must preserve the $s0–$s7 registers, the global pointer ($gp or $28), the stack pointer ($sp or $29) and the frame pointer ($30). The O32 ABI is the same, except that the calling function is required to save the $gp register instead of the called function.
For multi-threaded code, the thread local storage pointer is typically stored in hardware register $29 and is accessed by using the rdhwr (read hardware register) instruction. At least one vendor is known to store this information in the $k0 register, which is normally reserved for kernel use, but this is not standard.
The $k0 and $k1 registers ($26–$27) are reserved for kernel use and should not be used by applications since these registers can be changed at any time by the kernel due to interrupts, context switches or other events.
| Name | Number | Use | Callee must preserve? |
|---------|---------|-------------------------------------------------------|------------------------|
| $v0–$v1 | $2–$3 | values for function returns and expression evaluation | No |
| $k0–$k1 | $26–$27 | reserved for OS kernel | N/A |
| $gp | $28 | global pointer | Yes (except PIC code) |
Registers that are preserved across a call are registers that (by convention) will not be changed by a system call or procedure (function) call. For example, $s-registers must be saved to the stack by a procedure that needs to use them, and $sp and $fp are always incremented by constants, and decremented back after the procedure is done with them (and the memory they point to). By contrast, $ra is changed automatically by any normal function call (ones that use jal), and $t-registers must be saved by the program before any procedure call (if the program needs the values inside them after the call).
The userspace calling convention of position-independent code on Linux additionally requires that when a function is called the $t9 register must contain the address of that function. This convention dates back to the System V ABI supplement for MIPS.
The SPARC architecture, unlike most RISC architectures, is built on register windows. There are 24 accessible registers in each register window: 8 are the "in" registers (%i0-%i7), 8 are the "local" registers (%l0-%l7), and 8 are the "out" registers (%o0-%o7). The "in" registers are used to pass arguments to the function being called, and any additional arguments need to be pushed onto the stack. However, space is always allocated by the called function to handle a potential register window overflow, local variables, and (on 32-bit SPARC) returning a struct by value. To call a function, one places the arguments for the function to be called in the "out" registers; when the function is called, the "out" registers become the "in" registers and the called function accesses the arguments in its "in" registers. When the called function completes, it places the return value in the first "in" register, which becomes the first "out" register when the called function returns.
IBM System/360 and successors
The IBM System/360 is another architecture without a hardware stack. The examples below illustrate the calling convention used by OS/360 and successors prior to the introduction of 64-bit z/Architecture; other operating systems for System/360 might have different calling conventions.
         LA    1,ARGS                Load argument list address
         L     15,=A(SUB)            Load subroutine address
         BALR  14,15                 Branch to called routine [1]
         ...
ARGS     DC    A(FIRST)              Address of 1st argument
         DC    A(SECOND)             ...
         DC    A(THIRD)+X'80000000'  Last argument [2]
SUB EQU * This is the entry point of the subprogram
Standard entry sequence:
         USING *,15                  [3]
         STM   14,12,12(13)          Save registers [4]
         ST    13,SAVE+4             Save caller's savearea addr
         LA    12,SAVE               Chain saveareas
         ST    12,8(13)
         LR    13,12
         ...
Standard return sequence:
         L     13,SAVE+4             [5]
         LM    14,12,12(13)
         L     15,RETVAL             [6]
         BR    14                    Return to caller
SAVE     DS    18F                   Savearea [7]
1. The BALR instruction stores the address of the next instruction (the return address) in the register specified by the first argument—register 14—and branches to the second argument address in register 15.
2. The caller passes the address of a list of argument addresses in register 1. The last address has the high-order bit set to indicate the end of the list. This limits programs using this convention to 31-bit addressing.
3. The address of the called routine is in register 15. Normally this is loaded into another register and register 15 is not used as a base register.
4. The STM instruction saves registers 14, 15, and 0 through 12 in a 72-byte area provided by the caller, called a save area, pointed to by register 13. The called routine provides its own save area for use by subroutines it calls; the address of this area is normally kept in register 13 throughout the routine. The instructions following STM update the forward and backward chains linking this save area to the caller's save area.
5. The return sequence restores the caller's registers.
6. Register 15 is usually used to pass a return value.
7. Declaring a save area statically in the called routine makes it non-reentrant and non-recursive; a reentrant program uses a dynamic save area, acquired either from the operating system and freed upon returning, or in storage passed by the calling program.
Under the Linux ELF ABI for System/390 and z/Architecture:

- Registers 0 and 1 are volatile
- Registers 2 and 3 are used for parameter passing and return values
- Registers 4 and 5 are also used for parameter passing
- Register 6 is used for parameter passing, and must be saved and restored by the callee
- Registers 7 through 13 are for use by the callee, and must be saved and restored by them
- Register 14 is used for the return address
- Register 15 is used as the stack pointer
- Floating-point registers 0 and 2 are used for parameter passing and return values
- Floating-point registers 4 and 6 are for use by the callee, and must be saved and restored by them
- In z/Architecture, floating-point registers 1, 3, 5, and 7 through 15 are for use by the callee
- Access register 0 is reserved for system use
- Access registers 1 through 15 are for use by the callee
SuperH

| Register | Windows CE 5.0 | gcc | Renesas |
|----------|----------------|-----|---------|
| R0 | Return values. Temporary for expanding assembly pseudo-instructions. Implicit source/destination for 8/16-bit operations. Not preserved. | Return value, caller saves | Variables/temporary. Not guaranteed |
| R1–R3 | Serves as temporary registers. Not preserved. | Caller-saved scratch. Structure address (caller save, by default) | Variables/temporary. Not guaranteed |
| R4–R7 | First four words of integer arguments. The argument build area provides space into which R4 through R7 holding arguments may spill. Not preserved. | Parameter passing, caller saves | Arguments. Not guaranteed. |
| R8–R13 | Serves as permanent registers. Preserved. | Callee saves | Variables/temporary. Guaranteed. |
| R14 | Default frame pointer. (R8–R13 may also serve as frame pointer and leaf routines may use R1–R3 as frame pointer.) Preserved. | Frame pointer, FP, callee saves | Variables/temporary. Guaranteed. |
| R15 | Serves as stack pointer or as a permanent register. Preserved. | Stack pointer, SP, callee saves | Stack pointer. Guaranteed. |
Motorola 68k
- d0, d1, a0 and a1 are scratch registers
- All other registers are callee-saved
- a6 is the frame pointer, which can be disabled by a compiler option
- Parameters are pushed onto the stack, from right to left
- Return value is stored in d0
The IBM 1130 was a small 16-bit word-addressable machine. It had only six registers plus condition indicators, and no stack. The registers are Instruction Address Register (IAR), Accumulator (ACC), Accumulator Extension (EXT), and three index registers X1–X3. The calling program is responsible for saving ACC, EXT, X1, and X2. There are two pseudo-operations for calling subroutines,
CALL to code non-relocatable subroutines directly linked with the main program, and LIBF to call relocatable library subroutines through a transfer vector. Both pseudo-ops resolve to a Branch and Store IAR (BSI) machine instruction that stores the address of the next instruction at its effective address (EA) and branches to EA+1. Arguments follow the BSI—usually these are one-word addresses of arguments—the called routine must know how many arguments to expect so that it can skip over them on return. Alternatively, arguments can be passed in registers. Function routines returned the result in ACC for real arguments, or in a memory location referred to as the Real Number Pseudo-Accumulator (FAC). Arguments and the return address were addressed using an offset to the IAR value stored in the first location of the subroutine.
* 1130 subroutine example
       ENT  SUB          Declare "SUB" an external entry point
SUB    DC   0            Reserved word at entry point, conventionally coded "DC *-*"
*                        Subroutine code begins here
*                        If there were arguments the addresses can be loaded indirectly from the return address
       LDX  I 1 SUB      Load X1 with the address of the first argument (for example)
       ...
*                        Return sequence
       LD   RES          Load integer result into ACC
*                        If no arguments were provided, indirect branch to the stored return address
       B    I SUB        If no arguments were provided
       END  SUB
This variability must be considered when combining modules written in multiple languages, or when calling operating system or library APIs from a language other than the one in which they are written; in these cases, special care must be taken to coordinate the calling conventions used by caller and callee. Even a program using a single programming language may use multiple calling conventions, either chosen by the compiler, for code optimization, or specified by the programmer.
Threaded code places all the responsibility for setting up for and cleaning up after a function call on the called code. The calling code does nothing but list the subroutines to be called. This puts all the function setup and cleanup code in one place—the prolog and epilog of the function—rather than in the many places that function is called. This makes threaded code the most compact calling convention.
Threaded code passes all arguments on the stack. All return values are returned on the stack. This makes naive implementations slower than calling conventions that keep more values in registers. However, threaded code implementations that cache several of the top stack values in registers—in particular, the return address—are usually faster than subroutine calling conventions that always push and pop the return address to the stack.
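As a concrete illustration, here is a minimal sketch (in Python, with invented names, not taken from any real Forth system) of the idea: the calling code is nothing but a list of routines, and all arguments travel on a shared data stack.

```python
# A shared data stack: every word takes its inputs from, and leaves its
# results on, this stack, so no register conventions are needed.
stack = []

def lit(n):
    """Build a 'word' that pushes the literal n when executed."""
    def word():
        stack.append(n)
    return word

def add():
    """Pop two operands and push their sum."""
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)

def run(thread):
    """The inner interpreter: fetch each word and execute it."""
    for word in thread:
        word()

# The "calling code" does nothing but list the subroutines to be called.
run([lit(2), lit(3), add])
print(stack[-1])  # -> 5
```

A real threaded-code system would keep the instruction pointer and the top of stack in registers for speed, as noted above; this sketch only shows the calling structure.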
The default calling convention for programs written in the PL/I language passes all arguments by reference, although other conventions may optionally be specified. The arguments are handled differently for different compilers and platforms, but typically the argument addresses are passed via an argument list in memory. A final, hidden, address may be passed pointing to an area to contain the return value. Because of the wide variety of data types supported by PL/I a data descriptor may also be passed to define, for example, the lengths of character or bit strings, the dimension and bounds of arrays (dope vectors), or the layout and contents of a data structure. Dummy arguments are created for arguments which are constants or which do not agree with the type of argument the called procedure expects.
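To make the mechanics concrete, here is a hypothetical sketch (names and layout invented, not actual PL/I linkage) of by-reference passing with a descriptor carrying the "dope" information, and a dummy argument created for constants:

```python
class CharDesc:
    """Descriptor for a CHARACTER argument: the storage itself plus
    its declared length (the "dope" information)."""
    def __init__(self, chars):
        self.storage = list(chars)
        self.length = len(chars)

def upcase(arg):
    """Callee: reaches through the reference and mutates the caller's
    storage in place, using the descriptor for the length."""
    for i in range(arg.length):
        arg.storage[i] = arg.storage[i].upper()

def call_upcase(value):
    """Caller side: pass a variable's address directly, but build a
    dummy argument for a constant so the literal is never modified."""
    arg = value if isinstance(value, CharDesc) else CharDesc(value)
    upcase(arg)
    return "".join(arg.storage)

s = CharDesc("hello")
call_upcase(s)
print("".join(s.storage))   # -> HELLO (the caller's variable changed)
print(call_upcase("hi"))    # -> HI (a dummy argument shielded the constant)
```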
- "Procedure Call Standard for the ARM Architecture" 2008
- "ARM Cortex-A Series Programmer's Guide for ARMv8-A, §9.1.1. Parameters in general-purpose registers". ARM Developer. Retrieved 7 October 2018.
- Sweetman, Dominic. See MIPS Run, 2nd edition. Morgan Kaufmann. ISBN 0-12088-421-6.
- "MIPS32 Instruction Set Quick Reference".
- Karen Miller. "The MIPS Register Usage Conventions". 2006.
- Hal Perkins. "MIPS Calling Convention". 2006.
- MIPSpro N32 ABI Handbook (PDF). Silicon Graphics.
- "PIC code – LinuxMIPS". www.linux-mips.org. Retrieved 2018-09-21.
- "System V Application Binary Interface MIPS RISC Processor Supplement, 3rd Edition" (PDF). pp. 3–12.
- System V Application Binary Interface SPARC Processor Supplement (3 ed.).
- "S/390 ELF Application Binary Interface Supplement".
- "zSeries ELF Application Binary Interface Supplement".
- Dr. Mike Smith. "SHARC (21k) and 68k Register Comparison".
- XGCC: The Gnu C/C++ Language System for Embedded Development (PDF). Embedded Support Tools Corporation. 2000. p. 59.
- "COLDFIRE/68K: ThreadX for the Freescale ColdFire Family". Archived from the original on 2015-10-02.
- Andreas Moshovos. "Subroutines Continued: Passing Arguments, Returning Values and Allocating Local Variables". quote: "all registers except d0, d1, a0, a1 and a7 should be preserved across a call."
- IBM Corporation (1967). IBM 1130 Disk Monitor System, Version 2 System Introduction (C26-3709-0) (PDF). p. 67. Retrieved December 21, 2014.
- IBM Corporation (1968). IBM 1130 Assembler Language (C26-5927-4) (PDF). pp. 24–25.
- Mark Smotherman. "Subroutine and procedure call support: Early history". 2004.
- Brad Rodriguez. "Moving Forth, Part 1: Design Decisions in the Forth Kernel". quote: "On the 6809 or Zilog Super8, DTC is faster than STC."
- Anton Ertl. "Speed of various interpreter dispatch techniques".
- Mathew Zaleski. "YETI: a graduallY Extensible Trace Interpreter". 2008. Chapter 4: Design and Implementation of Efficient Interpretation. quote: "Although direct-threaded interpreters are known to have poor branch prediction properties... the latency of a call and return may be greater than an indirect jump."
|The Wikibook Embedded Systems has a page on the topic of: Mixed C and Assembly Programming|
|The Wikibook X86 Disassembly has a page on the topic of: Calling Conventions|
- S. C. Johnson, D. M. Ritchie, Computing Science Technical Report No. 102: The C Language Calling Sequence, Bell Laboratories, September, 1981
- Introduction to assembly on the PowerPC
- Mac OS X ABI Function Call Guide
- Procedure Call Standard for the ARM Architecture
- Embedded Programming with the GNU Toolchain, Section 10. C Startup | <urn:uuid:e08154e7-2e7a-4fd0-bee1-9136c39bf93a> | CC-MAIN-2019-30 | https://en.wikipedia.org/wiki/Calling_convention | s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528013.81/warc/CC-MAIN-20190722113215-20190722135215-00020.warc.gz | en | 0.842232 | 5,543 | 3.4375 | 3 |
As many as 33,000 manufacturing jobs and a 27% decline in Australia’s chemical, rubber and plastics sector are at stake if Australia’s energy prices continue to climb, according to a new report released by the University of Sydney’s United States Studies Centre.
Entitled “It Doesn’t Have to Be This Way: Australia’s Energy Crisis, America’s Energy Surplus,” the report reveals that Australia’s energy crisis goes beyond rising electricity bills and could have a devastating impact on the entire economy, impacting jobs and investment, and raising the prices of goods and services for households.
The report, which was backed by Dow Chemical and Chemical Australia, has found that Australian households and businesses pay dramatically more for their energy than their American counterparts — two to three times as much in many cases.
“An essential and irreplaceable input to many types of manufacturing, natural gas is now more than twice as expensive for Australian manufacturers as it is for their counterparts in the US state of Louisiana,” reads the report.
“And compared to 10 years ago, in nominal terms natural gas is now 177 per cent more expensive for a Melbourne-based manufacturer and 41 per cent cheaper for a New York-based manufacturer.”
The report, which notes that if Australia was a US state, it would be one of the most uncompetitive jurisdictions in terms of energy prices, also proposes several long-term fixes to drive down energy prices in Australia.
Ending moratoria on conventional and unconventional gas development, directly subsidising gas infrastructure expenditure (new pipelines, import terminals) and reforming the market architecture around gas pipelines and distribution networks, are some of the recommendations listed in the report.
The report also notes that getting the institutional and policy settings right will transform Australia’s resource abundance into economic abundance for the country, putting downward pressure on energy prices and emissions.
“In summary, the key lesson for Australia from the US energy revolution is that resource abundance will only get it so far,” the report says.
“Institutional arrangements, infrastructure development and regulatory and policy settings all matter for economic outcomes.”
“In the case of energy, getting these things right means avoiding billions of dollars in additional costs for households and businesses, more jobs and investment, higher wages and lower emissions.”
How Does Channel Width Affect 802.11ac Interference?
In my previous post, I discussed the two classifications of interference on your wireless network: non-Wi-Fi sources and Wi-Fi sources. In this post, I focus on a source of Wi-Fi interference specific to 802.11ac and how it is mitigated with current technology.
802.11ac brings its own challenge that is not often discussed: Overlapping BSS (OBSS). OBSS is defined as two BSSs that, well, overlap, because their channel configurations occupy some of the same space.
Consider the following scenario:
Two APs can hear each other. AP1 is set to use channel 36+40+44+48 (primary channel 36+40) while AP2 is set to use channel 44+48. Due to the nature of 802.11 and co-channel interference, AP1 cannot transmit while AP2 is transmitting.
In this scenario, the APs will forever dance the dance of CCI, waiting for the other to finish transmitting before taking over the medium itself. Wouldn’t it be better if instead of waiting, they could both transmit at the same time?
Figure 1: OBSS Visualized
802.11ac has a built-in mechanism to help mitigate the issue of OBSS. AP1 could simply utilize the other 40 MHz that is not occupied by AP2. AP1 has its primary channel set to 36+40, meaning that while AP2 is transmitting on 44+48, AP1 can transmit data simultaneously!
Be wary, though: in this scenario, when both APs are transmitting, the throughput on AP1 is half of what it achieves when transmitting alone. This is because AP1 switches from an 80 MHz channel width down to a 40 MHz channel width. However, while the individual throughput might be lessened, the overall throughput of the system is greater.
Lastly, remember that OBSS doesn’t just occur with 80 and 40 MHz channels. Many high density deployments are choosing to not use channel bonding in order to provide added connectivity instead of added throughput. This means that you might run into a 20 MHz channel overlapping your 40 or 80 MHz deployment.
Don’t worry, though, 802.11ac can shrink the channel width all the way down to 20 MHz in order to “play nice”. That means that if you set your AP to 36+40+44+48 with 36 being the primary channel and you hear another AP on channel 40, your AP will shrink its own channel width down to channel 36 in order to continuously transmit.
Figure 2: 802.11ac primary channel assignments
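The fallback behaviour described above can be modelled in a few lines. This is a toy model (not driver code), assuming an 80 MHz BSS on channels 36+40+44+48 with primary channel 36:

```python
BOND = {36, 40, 44, 48}   # the full 80 MHz bond; primary 20 MHz channel is 36
PRIMARY_40 = {36, 40}     # the primary 40 MHz half

def usable_width(busy_channels):
    """Widest channel width that still contains the primary channel
    while avoiding every channel an overlapping BSS is using."""
    busy = set(busy_channels)
    if not busy & BOND:
        return 80         # nobody overlaps: use the whole bond
    if not busy & PRIMARY_40:
        return 40         # e.g. AP2 occupying 44+48, as in the scenario above
    if 36 not in busy:
        return 20         # e.g. a 20 MHz neighbour on channel 40
    return 0              # even the primary channel is occupied

print(usable_width([]))         # -> 80
print(usable_width([44, 48]))   # -> 40
print(usable_width([40]))       # -> 20
```

Note how the OBSS from Figure 1 (an AP on 44+48) halves AP1's width rather than silencing it entirely.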
When designing your wireless network, take into consideration Wi-Fi sources of interference and deploy accordingly. If you are not achieving your expected throughput, Wi-Fi might be the cause. APs on the same channel (CCI) could be severely degrading network performance. Even APs that are broadcasting too loudly on other channels (ACI) can generate issues.
Lastly, beware OBSS. Even though there are mechanisms built in to suppress the effects of OBSS, you might still end up with those pesky support calls that we all dread. Hopefully, at the end of the day, someone will finally compliment you on how stable the wireless is.
10 Things You Didn’t Know About Black Holes
Stars 10 to 15 times as massive as the Sun generally end their lives as black holes, while smaller stars die as white dwarfs or neutron stars. So how do large stars become black holes? As stars grow old, they gradually expand and slowly run out of their supply of hydrogen and then helium. Their cores contract and their outer layers expand, and the stars become cooler and less bright, reaching what is known as the red giant phase. A star with three or more times the mass of the Sun then undergoes detonation (a violent release of energy caused by a chemical or nuclear reaction) in a cataclysm known as a supernova. Such an explosion scatters most of the star into space, but it also leaves behind a cold remnant of the star, one no longer able to sustain any nuclear fusion reaction.
As there is no fusion in the dead remnant of a massive supernova star, no energy is created to oppose the inward pull of gravity caused by the star's own mass. Thus, the star begins to collapse in upon itself. This is the formation of the black hole, wherein the star starts shrinking toward zero volume. With the volume approaching zero, the density becomes infinite, so much so that even light cannot escape the massively strong gravitational pull. As a result, even the light of the dead remnant gets trapped, and this dark star evolves into what is known as a black hole.
- It has been estimated that black holes of enormous size may exist at the center of our galaxy, the Milky Way. These holes are thought to have the mass of 10 to 100 billion suns. Now, that is something which is ‘HUGE’, in block letters!
- Cygnus X-1, located about 8,000 light years away from our planet Earth, is the closest black hole to Earth known to man.
- Although black holes have a reputation for the strongest suction force, they are not capable of absorbing the whole universe. Planets, light and other matter can be pulled into a black hole's grasp only if they happen to cross what is known as the event horizon. The radius of this event horizon is known as the Schwarzschild radius, and at this radius the escape velocity equals the speed of light. So, once an object has passed through it, it would have to travel faster than light in order to escape. That is the reason why even light cannot escape the event horizon of a black hole.
- As mentioned earlier, only the largest of stars are capable of ending up as black holes. Only these stars are massive enough to be compressed within the Schwarzschild radius, while smaller stars end up as white dwarfs or neutron stars.
- Several black holes exist in binary star systems. A star neighboring such a hole will keep shrinking as its mass is continually pulled away, and the black hole will keep growing until the other star has vanished.
- As light cannot escape from a black hole, it cannot be directly observed. However, scientists can detect the matter that swirls around the hole, usually gas and dust, which heats up and emits radiation that can be detected.
- Our Sun will never become a black hole, because it is not massive enough to shrink into one. Instead, it will end up as a white dwarf after several billion years.
- The center of a black hole is void of time and space.
- A giant elliptical galaxy in the constellation Virgo is thought to host the largest known black hole, at about 3 billion times the mass of the Sun.
- Larger black holes are known to suck up smaller ones in their vicinity.
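The Schwarzschild radius mentioned in the points above can be computed directly from r_s = 2GM/c². The sketch below (constants rounded) shows why the Sun, which would have to shrink to about 3 km, is in no danger of becoming a black hole:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # mass of the Sun, kg

def schwarzschild_radius(mass_kg):
    """Radius below which escape velocity exceeds the speed of light."""
    return 2 * G * mass_kg / C**2

print(f"Sun: {schwarzschild_radius(M_SUN) / 1000:.1f} km")
print(f"15 suns: {schwarzschild_radius(15 * M_SUN) / 1000:.1f} km")
```

The radius scales linearly with mass, so a 15-solar-mass remnant needs "only" to collapse to a few tens of kilometres.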
No matter how many facts people come up with, black holes represent an endless journey into the vast darkness of space. The concept hidden in the lap of black holes is, perhaps, the most appropriate analogy to the saying ‘sky is the limit!’
In the image: Simulation of gas cloud approaching the black hole at the center of the Milky Way.
Posted on Saturday, 21 July
Extracts from “Coral Crucible” by Din Silcock – Airlines PNG inflight magazine vol 22, 2012
Surveys by famous marine biologists like Professor Charles Veron and Dr Jerry Allen and respected organisations like The Nature Conservancy, have helped to establish a bewildering array of statistics for the area.
Kimbe Bay is host to around 860 species of reef fish, 400 species of coral and at least 10 species of whales and dolphins.
To put that in a global perspective – in an area roughly the same size as Carlifonia – Papua New Guinea is home to almost five percent of the worlds marine biodiversity.
Just under half of that fish fauna and virtually all of the coral species can be found in Kimbe Bay, which means that the bay should really be considered as a kind of fully stocked marine biological storehouse.
Bounded by the long Williaumez Peninsula to the west and Cape Toroko some 140km to the east, Kimbe Bay is sheltered from the worst of New Britain’s weather.
Along the coastal area of the bay, a 200m shelf runs parallel to the shore for about 5km before dropping down to about 500m and up to 1,000m in the eastern part. On the northern outskirts of the bay, as it approaches the Bismarck Sea, the sea floor drops off rapidly to in excess of 2,000m.
Across the deep seascape are dramatic sea-mounts and coral pinnacles that rise up towards the surface and provide isolated ecosystems for the marine creatures of the bay.
The sea-mounts in particular act as beacons to the bay’s diverse and prolific pelagics and marine mammals – with twelve species of mammals identified to date, including sperm whales, orcas, spinner dolphins and dugongs!
The deep waters and generally benign conditions function as a kind of marine nursery and are fundamental to the incredible biodiversity of Kimbe Bay, but the other significant element is the nutrient-rich currents of the Bismarck Sea, which provide the nutrients to sustain the bay’s residents and visitors.
To the south of New Britain are the 4,000m-deep basins of the Solomon Sea that the Southern Equatorial Current crosses as it makes its way to the Bismarck Archipelago.
As those powerful currents flow along the north coast of New Britain and around the top of the long and narrow Williaumez Peninsula, eddies are produced in the western part of Kimbe Bay that direct the nutrient rich flows into the bay and induce further upwellings from the deep water basins to the north.
In a nutshell, the forces of nature have combined to produce an almost perfect natural environment to create and sustain the coral crucible and the creatures that cohabit with it. | <urn:uuid:fb33e218-b7bc-4844-9433-ff706101398a> | CC-MAIN-2022-40 | https://bomaicruz.southernfriedscience.com/?p=631 | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337371.9/warc/CC-MAIN-20221003003804-20221003033804-00134.warc.gz | en | 0.925504 | 585 | 2.828125 | 3 |
Children aged 2 months through 16 years should be vaccinated against Japanese encephalitis (JE) if they are going to be traveling to areas that are endemic for the disease.
A panel of 15 immunization experts convened by the Centers for Disease Control and Prevention (CDC), in Atlanta, voted unanimously to add the younger age group to the existing recommendations for travelers aged 17 years or older, which were made in 2009.
The request to add young children to the recommendations was put before the CDC's Advisory Committee on Immunization Practices (ACIP) by Marc Fischer, MD, medical epidemiologist with the CDC’s Arboviral Diseases Branch in Fort Collins, Colorado.
Fatal Disease With No Treatment
Japanese encephalitis is endemic to Asia and the Western Pacific and is transmitted by mosquitoes.
"There is no treatment for Japanese encephalitis and it is fatal in 20 to 30 percent of people who get the infection, and about a third to a half of the survivors have some neurologic, cognitive or behavioural sequelae," Dr. Fischer told Medscape Medical News.
He presented data to the ACIP panel that demonstrated that the JE vaccine, which was licensed for use in children aged 2 months or older in May of this year, is safe and effective.
The vaccine is marketed by Novartis as IXIARO.
"This is a safe vaccine, it has a low incidence of serious adverse events and in the studies that we reviewed, which included 3 in children and 9 in adults, the serious adverse events were similar to those seen in Prevnar and Havrix, the comparison vaccines," Dr. Fischer said.
In the period between 1973 and 2012, there have been 65 cases (59 adults, 6 children) of JE among US travelers.
Eleven (19%) adults and 2 (33%) children died, 25 (42%) adults and 3 (50%) children survived with sequelae, and 15 (25%) adults and zero children had no sequelae. Details of the remaining 8 adults and 1 child are unknown, Dr. Fischer said.
Itineraries for 47 of the travel-associated cases revealed that the majority (30, or 64%) had a travel duration of 1 month or more, 13 (27%) traveled for 2 to 4 weeks, and 4 (9%) traveled for 1 to 2 weeks.
There were no cases of JE reported in short-term travelers visiting urban areas only. However, travel to rural areas was dangerous, with 17 individuals getting infected.
Relative Risk Is Low
The JE disease risk for most travelers is very low and varies on location, duration of travel, the season, and travel activities, Dr. Fischer said.
In addition, the vaccine, although safe and effective, is very expensive.
"These are all factors to be taken into account when making recommendations for who gets vaccinated," he said.
Vaccinating children living in endemic areas is cost-saving, but the JE vaccine for all travelers to Asia would not be cost effective, he said.
"Over 5 and a half million Americans travel to Asia each year, and the overall risk of Japanese encephalitis is low, less than 1 case per million travelers. Also, the cost of the vaccine is high, at $200 to $250 a dose. For some travelers, even a low risk of serious adverse events due to the vaccine may be higher than the risk for disease. Because of these factors, we think the JE vaccine should be targeted to travelers who are at increased risk for disease based on their planned itinerary," Dr. Fischer said.
Including Kids the Only Change
The current recommendations for vaccination stand, with the only addition the inclusion of children aged over 2 months.
As confirmed by the ACIP panel, these are as follows:
- The JE vaccine is recommended for travelers who plan to spend a month or longer in endemic areas during the JE virus transmission season. This includes long-term travelers, recurrent travelers, or expatriates who will be based in urban areas but are likely to visit endemic rural or agricultural areas during a high-risk period of JE virus transmission (Recommendation category A)
- The JE vaccine should be considered for short-term (less than 1 month) travelers to endemic areas during the JE virus transmission season if they plan to travel outside of an urban area and have an increased risk for JE virus exposure (eg, spending substantial time outdoors in rural or agricultural areas, participating in extensive outdoor activities, staying in accommodations without air conditioning, screens, or bed nets). The JE vaccine should also be considered for travelers to an area with an ongoing JE outbreak and those traveling to endemic areas who are uncertain of specific destinations, activities, or duration of travel (Recommendation category B).
- The JE vaccine is not recommended for short-term travelers whose visit will be restricted to urban areas or times outside of a well-defined JE virus transmission season (Recommendation Category A).
Dr. Fischer has disclosed no relevant financial relationships.
CDC's Advisory Committee on Immunization Practices Meeting: June 19, 2013. | <urn:uuid:224dcc5e-805d-4878-924c-40ed48981938> | CC-MAIN-2018-13 | http://pediatricianinhouse.blogspot.ru/2013/06/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257644701.7/warc/CC-MAIN-20180317055142-20180317075142-00631.warc.gz | en | 0.965106 | 1,051 | 2.90625 | 3 |
There is a small butterfly-shaped gland located at the base of your neck. That little gland is considered the most important hormone gland in your body, and it would be very difficult for other hormone glands to function without it. It is the control center responsible for metabolic function of every cell in your body. What is it? It’s your thyroid gland.
Knowing this, it likely comes as no shock that having a healthy thyroid is important to your overall health. Unfortunately, knowing when your thyroid isn’t performing to the best of its abilities can be difficult. Here’s what you need to know to diagnose and treat hypothyroidism before it wreaks too much havoc on your overall health.
Symptoms of an Unhealthy Thyroid
When the thyroid does not produce enough thyroid hormone, every part of your body will pay the price. In fact, you may be familiar with some of the major symptoms. Do any of these statements describe you?
- You continue to gain weight without much of an appetite.
- You’re constantly tired, but you can’t sleep.
- You are easily depressed and irritable.
- Your joints and muscles always ache.
- You feel cold all the time (even in the heat).
I think you can agree that this doesn’t sound like a healthy person. But the sad and unfortunate reality is that your doctor could easily miss a diagnosis of hypothyroidism.
Say you suspect you may have a thyroid problem, and your doctor gives you the thyroid stimulating hormone (TSH) blood test. (This test indicates hypothyroidism when thyroid hormone levels are low, and TSH levels are high. The pituitary gland is sometimes the problem when thyroid hormone and TSH levels are both low.) Your blood test reveals the normal range—between 0.5 and 4.5 mIU/L (milli-international units per liter). You are healthy then, right? Not so fast. Hypothyroidism is one of those tricky conditions that can be hard to detect with a simple blood test.
Do You Have Hypothyroidism 1 or 2?
Dr. Mark Starr is considered a leading hormonal health expert and is the author of Type 2 Hypothyroidism: The Epidemic. Wait, what? There’s a type 2 hypothyroidism, and it’s an epidemic? It’s true.
According to Dr. Starr, type 1 hypothyroidism is considered when the thyroid gland does not produce sufficient thyroid hormones for “normal” blood levels. The pituitary gland also fails to produce “normal” TSH blood levels. The TSH blood test will only diagnose two to five percent of hypothyroidism patients.
So what about type 2 hypothyroidism? Things are a little more complicated when it comes to type 2 hypothyroidism as a lack of thyroid hormone is not the issue. In fact, TSH blood tests often reveal normal amounts of TSH and thyroid hormones. Type 2 hypothyroidism is often an inherited condition, and there isn’t a blood test that will detect it.
So, what is the solution? It may be time to consider a holistic or naturopathic doctor. They will review the patient’s personal and family medical history. A saliva test or comprehensive blood thyroid panel may also help uncover your type 2 hypothyroidism.
The Basal Body Temperature Approach: Diagnosing Hypothyroidism
A basal body temperature test is also considered a very effective method for type 2 hypothyroidism diagnostics. It is a simple test that you can even perform in the comfort of your home.
In the morning, place a digital thermometer under your armpit, and hold it firmly there for 10 minutes while lying perfectly still; the thermometer reading may be altered by the slightest movement. After 10 minutes have passed, record your reading. Repeat this procedure for three days, then review your average reading.
It is an indication of hypothyroidism when the average reading ranges between 97.07 and 98.2 degrees Fahrenheit. It is estimated that 40% of North Americans suffer from type 2 hypothyroidism when the basal body temperature test is considered.
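The check described above can be sketched in a few lines. The range used here is the one quoted in the text; thresholds vary between practitioners, so treat this as illustrative, not diagnostic:

```python
def average_temperature(readings):
    """Mean of the recorded morning readings (degrees Fahrenheit)."""
    return sum(readings) / len(readings)

def flags_follow_up(readings, low=97.07, high=98.2):
    """True when the average falls in the range the text associates
    with possible type 2 hypothyroidism."""
    return low <= average_temperature(readings) <= high

three_days = [97.4, 97.2, 97.6]   # one reading per morning, three days
print(round(average_temperature(three_days), 2))  # -> 97.4
print(flags_follow_up(three_days))                # -> True
```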
Other Ways to Detect Type 2 Hypothyroidism
The holistic doctor will also consider your symptoms—all of them. The following are some type 2 hypothyroidism symptoms your holistic doctor will look for:
- Dry skin
- Tingling or numbness of the extremities
- Frequent infections
- Chronic pain
- Mouth and throat problems
- Endocrine and autoimmune diseases
- Eye problems
- Neurological disorders
- Kidney, liver, bladder, gallbladder, lung, and heart problems
- Hair loss
A history of cancer, especially thyroid or endocrine cancers, can indicate type 2 hypothyroidism as well.
Finally, the myxedema skin pinch test can also help your doctor diagnose your hypothyroidism. Simply put, you want to be able to pinch the skin on your arm or other areas of your body. Thickened and swelling skin is another major symptom of hypothyroidism.
How Is Hypothyroidism Treated?
Synthetic thyroid hormones, or T4 (levothyroxine), is considered the conventional hypothyroidism treatment. However, many people on this treatment still experience hypothyroidism symptoms. Desiccated thyroid is also sometimes used, but both medications are considered ineffective when adrenal fatigue is the root cause of the problem. Adrenal fatigue and hypothyroidism should be treated simultaneously.
A natural approach to thyroid and adrenal support should include tyrosine, iodine, selenium, zinc, and vitamins A, C, B2, B6, B12, D, and E. Also, a thyroid glandular supplement will help stimulate thyroid function.
Natural food sources for boosting thyroid health include sea vegetables such as nori, kelp, kombu, and wakame, which are high in iodine and recommended for people with hypothyroidism.
It is also wise to eliminate or at least reduce your exposure to possible hormone disruptors, including synthetic estrogen, heavy metals, halogens, and chemicals.
Sources for Today’s Article:
Murray, M., et al., The Encyclopedia of Natural Medicine (New York: Atria Paperback, 2012), 716–722.
Nutritional Symptomatology Level II (Toronto: Institute of Holistic Nutrition course notes, 2014), 180–186.
Trentini, D., “300+ Hypothyroidism Symptoms…Yes REALLY,” HypothyroidMom web site, November 19, 2012; http://hypothyroidmom.com/300-hypothyroidism-symptoms-yes-really/, last accessed March 4, 2015.
Balch, J., et al., Prescription for Natural Cures: A Self-Care Guide for Treating Health Problems with Natural Remedies Including Diet, Nutrition, Supplements, and Other Holistic Methods (Hoboken: John Wiley & Sons, Inc., 2004), 340–345.
Starr, M., Hypothyroidism Type 2: The Epidemic (Columbia: Mark Starr Trust, 2005), 1, 24–25, 52–53. | <urn:uuid:f93861bc-000c-4f6e-9a14-22698aa09bf1> | CC-MAIN-2020-24 | https://www.doctorshealthpress.com/general-health/diagnose-and-treat-an-unhealthy-thyroid/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419056.73/warc/CC-MAIN-20200601145025-20200601175025-00543.warc.gz | en | 0.904337 | 1,537 | 2.65625 | 3 |
Mental health and behavioral problem support to help your child and family
Mental and emotional well-being are part of a child's overall health – and something many parents and caregivers have concerns about at some point during a child's life. When these worries arise, it can be difficult to know if a child's mood or behavior is a normal part of growing up or if it's something more serious that may need professional attention.
Even when a family feels certain that their child needs help, knowing how and where to find it can be confusing. We are here to help.
Understanding mental health disorders in children and teens
The following topics can help you better understand child and adolescent mental health and the range of services and supports available to families in Central Virginia.
- Signs of problems and behavioral disorders: When a child’s problems with mood or behavior begin to interfere with relationships or everyday life, it may be a sign of a mental health disorder. We’re here to help you find the signs, diagnose, and treat them.
- Understanding evaluation: It is helpful for families to understand what goes into evaluating your child that might have a mental health and behavioral disorder.
- Know the available treatment: Following an evaluation, a mental health professional will recommend a specific treatment for your child.
Contact our resource center to help your family
The Cameron K. Gallagher Mental Health Resource Center helps families navigate and access mental health services in Virginia. Families can call directly to speak with a family navigator, free of charge. Primary care providers can also call us to refer families in need of support and assistance.
Call us at (804) 828-9897
Learn about the mental health services we have available for families
Recursos en Espanol
Aqui le proveemos una lista de paginas en espanol con una variedad de recursos sobre la salud mental. | <urn:uuid:7948a004-644d-4010-b6e7-da20d7ec3bff> | CC-MAIN-2020-40 | https://www.chrichmond.org/services/mental-health/family-support-resources/family-support-resources | s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402101163.62/warc/CC-MAIN-20200930013009-20200930043009-00712.warc.gz | en | 0.907314 | 394 | 2.59375 | 3 |
Humboldt-Universität zu Berlin
The Humboldt University of Berlin (German: Humboldt-Universität zu Berlin) is one of Berlin's oldest universities, founded in 1810 as the University of Berlin (Universität zu Berlin) by the liberal Prussian educational reformer and linguist Wilhelm von Humboldt, whose university model has strongly influenced other European and Western universities. From 1828 it was known as the Frederick William University (Friedrich-Wilhelms-Universität), and later (unofficially) also as the Universität unter den Linden after its location. In 1949, it changed its name to Humboldt-Universität in honor of both its founder Wilhelm and his brother, geographer Alexander von Humboldt. In 2012, the Humboldt University of Berlin was one of eleven German top-universities (also known as elite universities) to win in the German Universities Excellence Initiative, a national competition for universities organized by the German Federal Government.
The first semester at the newly founded Berlin university occurred in 1810 with 256 students and 52 lecturers in faculties of law, medicine, theology and philosophy under rector Theodor Schmalz. The university has been home to many of Germany's greatest thinkers of the past two centuries, among them the subjective idealist philosopher Johann Gottlieb Fichte, the theologian Friedrich Schleiermacher, the absolute idealist philosopher G.W.F. Hegel, the Romantic legal theorist Friedrich Carl von Savigny, the pessimist philosopher Arthur Schopenhauer, the objective idealist philosopher Friedrich Schelling, cultural critic Walter Benjamin, and famous physicists Albert Einstein and Max Planck. Founders of Marxist theory Karl Marx and Friedrich Engels attended the university, as did poet Heinrich Heine, novelist Alfred Döblin, founder of structuralism Ferdinand de Saussure, German unifier Otto von Bismarck, Communist Party of Germany founder Karl Liebknecht, African American Pan Africanist W. E. B. Du Bois and European unifier Robert Schuman, as well as the influential surgeon Johann Friedrich Dieffenbach in the early half of the 1800s. The university is home to 29 Nobel Prize winners.
The structure of German research-intensive universities, such as Humboldt, served as a model for institutions like Johns Hopkins University. Further, it has been claimed that "the 'Humboldtian' university became a model for the rest of Europe [...] with its central principal being the union of teaching and research in the work of the individual scholar or scientist."
This school offers programs in: | <urn:uuid:df11368a-190d-4953-b5f1-c258f5a10add> | CC-MAIN-2018-51 | https://www.masterstudies.ca/universities/Germany/HU/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823705.4/warc/CC-MAIN-20181211215732-20181212001232-00036.warc.gz | en | 0.914412 | 567 | 3.15625 | 3 |
It's the truth, it's actualEver wonder where the characters from the Splash Mountain ride at the Disney Theme Parks come from? Song of the South is a 1946 Disney film that incorporated animation and live action. You haven't heard of it? That's understandable; it has never been released in the US outside the theater, and not released at all since 1986. (This also means the ride is better known than the original film.)The film is, unbeknownst even to the people who have seen it (especially in Europe, where the context is lost), based on a collection of African-American folktales compiled by Joel Chandler Harris in the late 1800s. It is notable that, although the Framing Device is accused of racism today, it was considered pretty Fair for Its Day, being written by a Southerner: Harris was attempting to compile African-American folk tales that had been passed down from the days of slavery before they were lost.The popularity of the book led to the popularity of archetypes such as Br'er Rabbit, the "Briar Patch" and the "Tar Baby" (the meaning of which tropes subsequently were lost to younger viewers after the film was sealed in the Disney vault in the '80s, when the stories themselves became forgotten by later generations unfamiliar with the work) which were taken straight from the original folktales. Some who maintain that the film should not be released note, however, that keeping these tales alive ties in too much with the days of slavery and Reconstruction, a shameful period in American history that they feel children should not be subjected to, at least not in a way that could be perceived as anything but monstrous.Set in the Deep South after The American Civil War, the film features Uncle Remus telling stories of Br'er Rabbit and friends to three kids from his rural cabin. 
Due to the "impression it gives of an idyllic master-slave relationship" (the film was probably set during Reconstruction, just so that Uncle Remus would not be depicted as a slave — though he almost certainly has been one) it will probably never be released in the US. It was available on VHS tape in the UK (where the associated sensitivities are still present, but further from the surface) throughout the '90s and early '00s, and shown as an afternoon family film on TV. It was also aired a few times on the Disney Channel during the 1980's. A Japanese laserdisc (with an English track version included as a bonus) was also released years ago, and it's become quite a collector's item. As of this writing, Whoopi Goldberg is in talks to get Disney to release it on DVD.You probably do know a song from it, though, that one being "Zip-a-Dee Doo-Dah." (Now just try getting it out of your head.)In some European countries, like the Netherlands and Scandinavia, Br'er Rabbit comics was introduced in the early 50's, and remains popular and are still a regular part of the weekly Disney comics. And while the framing device with Uncle Remus was featured in the first comics, it has since quietly disappeared and faded into obscurity, to the point where only few readers know that it has ever existed. And while the film was released in Europe, it is virtually unknown there.
Ev'rything is satisfactual
Wonderful feelin', wonderful day
Ev'rything is satisfactual
Wonderful feelin', wonderful day
Song of the South provides examples of:
- Accessory-Wearing Cartoon Animal: Mr. Bluebird
- Adults Are Useless: Played straight with Johnny's mother Sally. She's so wrapped up in trying to make him feel better and raise him properly that she doesn't bother listening to anyone else's advice or explanations (not even Johnny's), and unknowingly makes things worse for him as a result. Johnny's father, John Sr., does come around, with gentle urging from Uncle Remus; Sally finally accepts what's going on.
- Barefoot Cartoon Animal: Most of the animal characters, if not all.
- Bears are Bad News: Br'er Bear. But on the other hand...
- Beary Funny: Br'er Bear may be a villain, but a harmless and humorous one.
- Bee Afraid: As part of Br'er Rabbit's "laughing place" scam. Lampshaded by Br'er Bear when he is the first to fall for this and emerges with the beehive on his nose, saying, "There ain't nothin' in here 'cept beeeeeezzzzzzz!" and a swarm of bees comes flying out of his mouth.
- Big Ball of Violence: Br'er Rabbit gets caught in one with Br'er Fox and Br'er Bear at one point.
- Big Fancy House
- Black Best Friend: Toby
- Brains and Brawn: Br'er Fox and Br'er Bear.
- Briar Patching: Not just the story that the trope is named after, but after hearing the story, Johnny pulls this trope on Ginny's brothers.
- Carnivore Confusion: The Hero of the stories is a rabbit, so the fox and the bear are villains.
- Chekhov's Gunman: The bull.
- Comic-Book Adaptation: Eventually, the Br'er Rabbit stories just drop the movie's original frame story altogether. Naturally, it's only those later stories that get reprinted. The characters, most notably Br'er Bear, also make numerous appearances in The Three Little Pigs comics.
- Cool Old Guy: Uncle Remus
- Cunning Like a Fox: Br'er Fox, or so he would think. You just have to go on the ride to see Br'er Bear run into trouble.
- Dark Reprise:
- Both on the ride and in the movie, the song "Laughing Place" gets a dark reprise ("Burrow's Lament"). It has vocals in the dark reprise only in the Disneyland version. In Disney World, there is just an instrumental.
- Br'er Fox singing "How do you do".
- Does This Remind You of Anything?: Both subverted and inverted: The Boondocks is the one asking the question, and the answer is probably "No," because Aaron McGruder's one of the few younger Americans who has seen it. For those that haven't, Song of the South has Uncle Remus, and The Boondocks has Uncle Ruckus. Plenty of Americans have seen it, just not the under-25 crowd. It used to be broadcast occasionally up until The '80s.
- Doomed New Clothes: Poor Ginny Favers... she understandably breaks down in tears.
- Exact Words: During the Laughin' Place scene where Br'er Rabbit tricks his two foesBr'er Bear: You said this was a laughin' place! And I ain't laughin'! [gets attacked by bees]
Br'er Rabbit: [in-between laughing fits] I didn't say it was YOUR laughin' place, I said it was MY laughin' place, Br'er Bear!
- Forbidden Fruit: You know you want to see it... you don't even care about the quality.
- Glad I Thought of It: What Br'er Fox usually says when Br'er Bear comes up with the ideas to catch Br'er Rabbit.
- Gory Discretion Shot: Well, we never actually see Johnny attacked by the bull, do we now?
- Half-Dressed Cartoon Animal: Br'er Bear and the moles.
- The Hyena: Br'er Rabbit during the Laughing Place scene.
- Infant Immortality
- Intergenerational Friendship: Uncle Remus and Johnny and the other children.
- "Just So" Story: Towards the beginning the protagonist comes upon a gathering of black sharecroppers in the shade, singing about Uncle Remus' tales, which tell how the leopard got his spots, how the camel got those humps, and how the pig got a curly tail.
- Karma Houdini: Ginny's brothers. Except when they pushed her into the mud and ruined her dress, Uncle Remus showed up to tell them off for bullying her.
- Karmic Trickster
- Kick the Dog: Ginny's brothers do this when they mistreat her puppy and threaten to drown it — and mean every word.
- Large Ham: All of the Br'er animals, especially Br'er Fox.
- Lean and Mean: Br'er Fox. No wonder he's so hot on that rabbit's trail; Br'er Rabbit'd be the only square meal Br'er Fox's had in a while!
- Lonely Rich Kid: Johnny
- Magical Negro: Uncle Remus. He could also be a subversion, since in the end he actually DOES step forward and save the day.
- Mammy: The most famous mammy of them all, Hattie McDaniel, is in this movie.
- Medium Blending
- Mickey Mousing: As usual for a Disney film — and then played for laughs, when Br'er Bear has trouble keeping up with the background music.
- Motor Mouth: Br'er Fox. The Disney animation directors actually had to invent a new animation process to keep up with James Baskett's rapid-fire delivery of Br'er Fox's dialogue.
- Nice Hat: Mr. Bluebird's top hat.
- Nothing Is Scarier: You see the bull chasing Johnny in the climax, but you never see it strike. It is up to you to imagine the extent of the little boy's injuries...
- Only Known by Their Nickname: In case you were wondering, "Br'er" is just short for "Brother". (And it should actually be pronounced more or less like "bro.") Some of the comics imply that they do have real names, but they are otherwise unmentioned. Joel Chandler Harris gives Riley as Br'er Rabbit's real name. A very few Disney comics mention it now and then.
- Oh, Crap!: Br'er Rabbit gets progressively more and more nervous during his plan to escape via reverse psychology when Br'er Fox keeps ignoring him. He finally gets one big 'Oh Crap' expression when Br'er Fox states that he'll skin him.
- Poor Communication Kills: No-one bothers to tell Johnny’s mother that Johnny got the puppy fair and square or what would happen to the puppy if it were returned to its previous owner, so she chalks up any disobedience on Johnny's part to Uncle Remus' influence.
- Precision F-Strike: In the Mexican Spanish dub, Br'er Rabbit says "maldito" while boarding up his house in his very first appearance in the first animated segment.
- Real After All: All the cartoon characters show up at the end, in the real world, then the kids and Uncle Remus go off into the sunset with them.
- Reverse Psychology: The Briar Patching moment. In the ride, this moment cues when your log crests the belt for the big final drop.
- Roger Rabbit Effect: Not the Ur-Example, as some might tell you — that would be Gertie the Dinosaur — but the first time it was used in the mainstream Disney features (The Alice Comedies and The Three Caballeros notwithstanding).
- Sand In My Eyes: Inverted by Uncle Remus to crying Johnny.
- Simpleton Voice: Br'er Bear.
- Smoking Is Cool: Uncle Remus shares a pipe with Br'er Frog. This overlaps with Fair for Its Day. Back when the film was released, most people smoked, and those who didn't were frowned upon, if not shunned or hated. By showing Uncle Remus smoking on screen, they were attempting to make the audience like him more. More so because it's the only scene in the movie where anyone is seen smoking. Br'er Frog blows a smoke ring. Uncle Remus blows a smoke square. How cool is that?!
- Stating the Simple Solution: Br'er Bear has his doubts about Br'er Fox's scheme to trap Br'er Rabbit in the Tar Baby being a success, and usually prefers the quickest, simplest, probably most effective way.Br'er Fox: That big ol' rabbit won't get away this time. No sir, we'll catch him, sure! I'll catch him, sure!
Br'er Bear: But, uh, that's what you said the last time before, and the time before that, and the... Look, let's just knock his head clean off.
Br'er Fox: Oh, no, indeed, ain't nothing smart about that. I'm gonna show him who the smartest is, and that Tar Baby'll do the rest!
[and once they've caught Br'er Rabbit]
Br'er Bear: I'm gonna knock his head clean off!
Br'er Fox: No, no, no, that's too quick! We gotta make him suffer!
- Sticky Situation: Trope name for "Tar Baby", an expression which at least traditionally refers to this.
- Stock Beehive: Br'er Rabbit finds a grey one hidden in a bush. He tricks Br'er Bear inside saying that it's his "laughing place". Bear gets the hive stuck on his nose.
- Those Two Bad Guys: Br'er Fox and Br'er Bear, obviously. Also, Ginny's brothers.
- ¡Three Amigos!: Johnny, Toby, and Ginny.
- Three Shorts: Three gorgeously animated sequences.
- The Trickster: Br'er Rabbit
- Villainous B.S.O.D.: Br'er Fox has one at the end of the "Tar Baby" sequence: a sickly look on his face after Br'er Rabbit tricked him and hopped off. Br'er Bear silently clubs the fox on the head, knocking him out, then walks off, leaving the fox lying there.Remus: [narrating] So now it's Br'er Fox's turn to feel humble-come-tumble. But ol' Br'er Bear, he don't say nothin'. And Br'er Fox, he lay low — mighty low.
- Villainous Glutton: Br'er Bear, though Br'er Fox is the one with the most pointed culinary interest in Br'er Rabbit.
- Why Don't You Just Shoot Him?: Br'er Bear, dumb as he is, is typically opposed to Br'er Fox's overly complicated schemes to catch Br'er Rabbit and constantly voices his preference to just "knock his head clean off". | <urn:uuid:2e533ec7-542f-4cbb-8757-9b37fbb8d8e4> | CC-MAIN-2017-43 | http://tvtropes.org/pmwiki/pmwiki.php/Film/SongOfTheSouth | s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825264.94/warc/CC-MAIN-20171022132026-20171022152026-00700.warc.gz | en | 0.962291 | 3,075 | 2.796875 | 3 |
In sewing, strings of various thickness and stiffness are used for piping, a type of trim inserted into a seam to define the edges or style lines of a garment or bag.
A team of researcher at LG Chem seems to add another function to the piping: they developed a Li-Ion battery in string form just a few millimeter thick, is bendable and can even be knotted without compromising it’s battery function.
The energy storage technology is based on Li-Ion chemistry just like conventional Li-Po batteries only twisted into a round, fine string instead in a flat, geometrical form factor.
Thin strands of nickel and tin coated copper wires form the anode. The researcher then spin the coated copper wires into metal yarn, wrap it around a rod to form a spring shape which functions as structural element of the string battery as well as the anode. | <urn:uuid:0d20f40a-093b-4c65-9625-0e394b1d8070> | CC-MAIN-2018-13 | https://blog.adafruit.com/2012/09/06/long-flexible-battery/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648226.72/warc/CC-MAIN-20180323122312-20180323142312-00668.warc.gz | en | 0.915316 | 181 | 2.6875 | 3 |
October 13, 1955
The U.S. Army officially renamed Camp Rucker to Fort Rucker after declaring it a permanent installation and relocating the Army Aviation School from Oklahoma onto the base. Founded as an infantry training camp in Dale County during World War II, Camp Rucker served as the training site of four infantry divisions and later housed German and Italian prisoners of war until the war’s end. Today, the 64,000-acre base is the state’s largest military installation with a daytime population of 19,000. The fort is the primary flight training base for Army Aviation and is home to the Army Aviation Museum, which houses the Army Aviation Hall of Fame.
Read more at Encyclopedia of Alabama.
The headquarters at Camp Rucker in Coffee and Dale counties, ca. 1940s. (From Encyclopedia of Alabama, courtesy of US Army Aviation Museum)
Edmund W. Rucker (1835-1924) served the Confederate Army under Gen. Nathan B. Forrest during the Civil War and later became a successful Birmingham industrialist. Fort Rucker in Coffee and Dale counties is named in his honor. (From Encyclopedia of Alabama, courtesy of Birmingham Public Library Archives)
Fort Rucker is the U.S. Army’s combat aviation center, as well as the helicopter maintenance facility and pilot training ground for all branches of service. (From Encyclopedia of Alabama, courtesy of The Birmingham News)
Students piloting UH-1H Iroquois helicopters practice formation-flight patterns at Fort Rucker in south Alabama in December 2009. (From Encyclopedia of Alabama, courtesy of the U.S. Air Force. Photograph by Airman 1st Class Anthony Jennings)
For more on Alabama’s Bicentennial, visit Alabama 200. | <urn:uuid:7aaa4f6d-bbb2-4106-8d0a-6b79628dd868> | CC-MAIN-2020-29 | https://alabamanewscenter.com/2017/10/13/day-alabama-history-army-renamed-camp-rucker-fort-rucker/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655912255.54/warc/CC-MAIN-20200710210528-20200711000528-00554.warc.gz | en | 0.938197 | 361 | 3.140625 | 3 |
This course sensitizes regarding privacy and data protection in Big Data environments. You will discover privacy preserving methodologies, as well as data protection regulations and concepts in your Big Data system. By the end of the course, you will be ready to plan your next Big Data project successfully, ensuring that all privacy and data protection related issues are under control. You will look at decent-sized big data projects with privacy-skilled eyes, being able to recognize dangers. This will allow you to improve your systems to a grown and sustainable level. If you are an ICT professional or someone who designs and manages systems in big data environments, this course is for you! Knowledge about Big Data and IT is advantageous, but if you are e.g. a product manager just touching the surface of Big Data and privacy, this course will suit you as well. | <urn:uuid:cd6f58b7-0d2f-4aa9-9117-7eae4a6df1f9> | CC-MAIN-2021-10 | https://pt.coursera.org/lecture/security-privacy-big-data-protection/social-costs-of-big-data-pWIYJ | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361723.15/warc/CC-MAIN-20210228175250-20210228205250-00440.warc.gz | en | 0.93634 | 170 | 2.515625 | 3 |
Write-up by Govindam Company College
MEDIA PLANNING AND EVALUATING MEDIA

A media plan decides how advertising time and space in different media will be used to achieve the advertising objectives of the company. The basic objective of a media plan is to find that combination of media which enables the advertiser to communicate the ad-message in the most effective manner at the lowest cost of communicating with the target audience. An advertising plan is prepared by the advertiser to attain advertising objectives. Advertising objectives are decided keeping in view the marketing objectives of the company.
one) Selecting ideal media to serve the advertiser’s need i.e selecting media whichcan reach the target audience of advertiser.two) Picking greatest mixture or mix of media which is in the advertiser’s advert-finances.three) Choosing most appropriate media schedules.
In short, media planning consists of the answers to the following five W's:

1) Whom: Whom do we want to reach? i.e. identifying the target audience (potential customers).
2) Where: Where are the potential customers located? i.e. identifying the geographical location.
3) What: What kind of message is to be communicated? i.e. whether the message is informative or demonstrative in nature.
4) Which: Which media are to be selected for communicating with our target audience? i.e. identifying a suitable media-mix.
5) When: When is the advertisement to be issued? i.e. deciding the month, day and time of the advertisement.
Steps Involved in Media Planning:

1) To know about the target market.
2) To consider the various factors affecting media planning:
• Internal factors
• External factors
3) To identify the geographical location.
4) Developing media objectives.
5) Selecting the optimum media-mix.
6) Selecting a suitable media vehicle within each selected medium.
7) Media scheduling.
8) Execution of the advertising programme.
9) Follow-up and evaluation.
Factors Affecting Media Planning
(1) Nature of Product

The product to be advertised can be an industrial/technical product or a consumer product. Industrial/technical products can better be advertised in their specific trade journals/magazines. Consumer products can better be advertised through mass media such as TV, newspapers, outdoor advertising, etc. Likewise, products for export can be advertised in journals which have circulation in other countries, like 'Products from India' or 'Product Finder'. Fashionable goods can be advertised in fashion magazines like Filmfare, Femina, Stardust, etc.
(2) Nature of Customers:

An appropriate media plan must consider the type or class of customers for whom advertising is to be done. Customers differ in their age group, sex, income, personality, educational level and attitude. On the basis of consumer characteristics, consumer groups can be: men, women, children, young, old, professionals, businessmen, high-income group, middle-income group, low-income group, literate, illiterate, etc.

(a) Age: For advertising children's products, television is the best medium for communicating the message; on TV, the ad can even be placed on cartoon-related channels. If the target audience is young, then television and magazines are suitable. If the target audience consists of the old age group, then newspapers and television are a good choice.

(b) Level of Education: If the target audience is highly educated, then advertisements should be given in magazines, national newspapers, the internet and television. If the target audience is less educated, then local newspapers printed in local languages, low-profile magazines and TV are suitable. If the audience is illiterate, then print media is not suitable; here broadcast media is a good choice.

(c) Number of Customers: If the number of target customers is large, then mass media like television and newspapers will be considered. If the number of target customers is small, then direct-mail media and tele-marketing are suitable.
(three) Traits of Distribution Channels:Distribution channels can be categorised on the basis of geographical distribution of merchandise or providers of advertiser. Distribution channels may be labeled as neighborhood distributor, regional distributors, nationwide distributors, worldwide distributors. a) Neighborhood Distributor: If the merchandise is to be dispersed regionally or regionally, then media with regional coverage and attain must be thought of like regional newspaper, cable-network etc.b) Countrywide Distributors: If solution is dispersed on nationwide amount, then media with nationwide protection like countrywide dailies (newspaper), nationwide-degree-T.V. channels will be ideal.do) Global Distributors: If the merchandise is to be bought at worldwide amount, then media possessing achieve and circulation in foreign nations will be efficient e.g. Web, Publications with circulation in international international locations, T.V. channels acquiring global coverage like B.B.C really should be deemed. If range of sellers is less, then direct-mail-media can also be chosen.
(4) Advertising Objectives:

The major objective of every advertising campaign is to get a favorable response from customers, but the specific objectives can differ. If the objective of the advertising campaign is to get quick results, then fast media of communication like newspapers will be considered. If the objective of advertising is to build corporate goodwill and brand equity, then magazines and television will be considered.
(5) Nature of Message:

If the advertising message is informative in nature, then newspapers will be suitable. If the ad-message is meant to persuade customers (through emotional appeal, rational appeal or demonstration of the product), then television will be considered. For example, if the ad-message is to inform potential customers of sales-promotion schemes, discount offers, exchange offers or festival offers, then it can be advertised through posters, banners, newspaper inserts and newspapers. If the ad-message is to inform about and persuade for a new product launched by the advertiser, it can be advertised on television and in newspapers. Through TV, the advertiser can show the new product, demonstrate its uses, compare it with existing products and create desire for the new product.
(six) Dimension of Advert–BudgetIf amount of advertisement finances is a lot more, then expensive media like tv, national dailies, common magazines can be chosen. If sum of Advertisement-finances isless, then media like posters, banners, cable-network, local newspaper, pamphlets will be appropriate.
(seven) Media Utilized by Competition :While organizing for media the advertiser should contemplate the media selected by competition and leaders of that sector. If advertiser does-not contemplate opponents and leaders of that business. If advertiser does-not contemplate competitors transfer regarding media, then it is feasible that advertiser’s industry share is taken by opponents. If industry-leader is employing T.V. as media, then the advertiser, way too, have to think about the identical media. The advertiser must have a near observe on media-tactics, advertisement-price range of rivals. However, competitor’s tactics ought to not be adopted blindly, as it is feasible that choice of competitor is wrong.
(eight) Media- AvailabilitySometimes it is attainable that wanted house for ads, in print media is previously booked by some other advertiser. For instance, advertiser needs to problem an advertisement on entrance page of newspaper or on the cover-webpage of any journal, butthis house is previously booked by some other advertises, then this media is not available to the advertiser, and he will prepare for some other media or he will have to change timing of ad. Equally if an ad is to be issued on tv throughout a certain programme, then it is possible that advertising time is not offered on that programme, as it may have been booked / sponsored by other advertisers. SO, media-availability have to be considered for media arranging.
(9) Media Achieve and Protection:This sort of media should be chosen which has broad achieve and can go over our goal consumers. If the advertiser has two accessible media, involving identical value, then media with a lot more achieve and coverage of our target viewers will be selected. Mediareach implies complete circulation/viewership of media in a provided period of time of time, will be known as its reach for every day. If advertisement is provided on T.V., then anticipated viewers dimension of that Tv programme in which advert is issued, in a provided period of time is known as its reach. If measurers the number of folks who are exposed at minimum when to this media in a particular period of time. Media protection refers to the potential audience who may acquire the message provided by media. Greater media attain will guarantee higher media coverage, if the media matches with the features of our goal viewers. So chosen media must match with out target audience.
(10) Media Expense:Advertiser should examine the price of every single media by contemplating the quantity of viewers lined by these kinds of media. It is feasible that a media would seem to be high priced, but if it can go over huge quantity of audience, then expense per audience will be less. In case, advertisement is to be offered in newspapers, then expense of distinct newspaper is computed on the foundation of value for every a single lac of its circulation.
(11) Media- Frequency
Media-frequency refers to typical quantity of instances the viewers is exposed to media-automobile in a specified time period of time. Higher media-frequency is preferred. Better the frequency, much more are the odds of advertisement message making deep impression on the minds of buyers. In scenario of print media, frequency of newspaper isvery significantly less as the receiver is not exposed to the exact same newspaper for a long period of time of time. On the subsequent day he will be finding the new newspaper and old newspaper will be discarded the exact same day. In case of journal, media-frequency is more as very same magazine might be opened by viewers could a time, as the journal will be repeated after a month or right after a fortnight. In case of tv, if an ad is provided in a weekly T.V. programme, and if it is presented once in each and every episode, and say advertisement is presented for 4 episodes, then listed here frequency implies number of times, the audience are exposed to this ad in 4 week duration time period. Higherfrequency will produce better impressions on goal audience. So media with greater frequency really should be picked.
Some media-picture autos take pleasure in far better image in comparison to other media cars. Media-picture improves the conversation worth of ad. Goodreputation of editorial board, well founded media, appreciate much better picture amid general public, so adverts given in these kinds of media increase the credibility and rely on of commercials. So media with very good image really should be picked.
About the Author
Govindam Business Faculty gives you an unparallel chance to examine at advance level, to function with in a difficult, stimulating and rewarding atmosphere, to create skills and competencies which will last throughout schools life.
Use and distribution of this post is matter to our Publisher Tips
whereby the authentic author’s data and copyright have to be incorporated.
Govindam Company University
Include to Favorites
Get in touch with Us
GoArticles.com © 2012, All Rights Reserved. | <urn:uuid:694e4213-490c-471d-89fd-7f57acdde34a> | CC-MAIN-2015-06 | http://ayskreme.com/category/media/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115900160.86/warc/CC-MAIN-20150124161140-00210-ip-10-180-212-252.ec2.internal.warc.gz | en | 0.917333 | 2,494 | 2.640625 | 3 |
Oct. 10, 2002 WASHINGTON - Both air pollution and global warming could be reduced by controlling emissions of methane gas, according to a new study by scientists at Harvard University, the Argonne National Laboratory, and the Environmental Protection Agency. The reason, they say, is that methane is directly linked to the production of ozone in the troposphere, the lowest part of Earth's atmosphere, extending from the surface to around 12 kilometers [7 miles] altitude. Ozone is the primary constituent of smog and both methane and ozone are significant greenhouse gases.
A simulation based upon emissions projections by the Intergovernmental Panel on Climate Change (IPCC) predicts a longer and more intense ozone season in the United States by 2030, despite domestic emission reductions, the researchers note. Mitigation should therefore be considered on a global scale, the researchers say, and must take into account a rising global background level of ozone. Currently, the U.S. standard is based upon 84 parts per billion by volume of ozone, not to be exceeded more than three times per year, a standard that is not currently met nationwide. In Europe, the standard is much stricter, 55-65 parts of ozone per billion by volume, but these targets are also exceeded in many European countries.
Writing this month in the journal Geophysical Research Letters, Arlene M. Fiore and her colleagues say that one way to simultaneously decrease ozone pollution and greenhouse warming is to reduce methane emissions. Ozone is formed in the troposphere by chemical reactions involving methane, other organic compounds, and carbon monoxide, in the presence of nitrogen oxides and sunlight. Methane is known to be a major source of ozone throughout the troposphere, but is not usually considered to play a key role in the production of ozone smog in surface air, because of its long lifetime.
Sources of manmade methane include, notably, herds of cattle and other ungulates, rice production, and leaks of natural gas from pipelines, according to the IPCC. In addition, natural sources of methane include wetlands, termites, oceans, and gas hydrate nodules on the sea floor.
In a baseline study in 1995, 60 percent of methane emissions to the atmosphere were the result of human activity. The IPCC's A1 scenario, which Fiore characterizes as "less optimistic in terms of anticipated emissions than a companion B1 scenario," posits economic development as the primary policy influencing future trends of manmade emissions in most countries. Under A1, emissions would increase globally from 1995 to 2030, but their distribution would shift. Manmade nitrogen oxides would decline by 10 percent in the developed world, but increase by 130 percent in developing countries. During the same period, methane emissions would increase by 43 percent globally, according to the A1 scenario.
The researchers find that a reduction of manmade methane by 50 percent would have a greater impact on global tropospheric ozone than a comparable reduction in manmade nitrogen oxide emissions. Reducing surface nitrogen oxide emissions does effectively improve air quality by decreasing surface ozone levels, but this impact tends to be localized, and does not yield much benefit in terms of greenhouse warming. Reductions in methane emissions would, however, help to decrease greenhouse warming by decreasing both methane and ozone in the atmosphere world-wide, and this would also help to reduce surface air pollution.
Both in the United States and Europe, aggressive programs of emission controls aimed at lowering ozone-based pollution may be offset by rising emissions of methane and nitrogen oxides from developing countries, the researchers write. Pollution could therefore increase, despite these controls, and the summertime pollution season would actually lengthen, according to the simulation under the A1 scenario.
The study was funded by the Environmental Protection Agency (EPA), National Aeronautics and Space Administration (NASA), and the National Science Foundation (NSF).
Other social bookmarking and sharing tools:
Note: Materials may be edited for content and length. For further information, please contact the source cited above.
Note: If no author is given, the source is cited instead. | <urn:uuid:854bdd77-5c7a-4bc8-b77b-9caa48b5721d> | CC-MAIN-2013-20 | http://www.sciencedaily.com/releases/2002/10/021010065923.htm | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706964363/warc/CC-MAIN-20130516122244-00023-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.942969 | 825 | 3.859375 | 4 |
Dictionary.com has the following definition of scientism:
"The belief that the assumptions, methods of research, etc., of the physical and biological sciences are equally appropriate and essential to all other disciplines, including the humanities and the social sciences."
I think this is a weak definition. Those with a scientistic mentality don't always think that the methods of the hard sciences are appropriate to the humanities. What they think is that the methods of the hard sciences are the only methods that can result in knowledge. Their conclusion is not that scientific methods are appropriate to the humanities, but that the humanities don't issue in knowledge because they cannot be pursued according to scientific methods.
I prefer the following definition of scientism, which may not be original with me, although I have not come across it in quite this formulation:
"Scientism is the mistake of taking the results of science to be more firmly known than its prerequisites."
It is, in other words, to think physics is more certain than metaphysics. It is to be confident that the atoms, electrons, quarks and black holes that result from scientific inquiry are "really real", but be suspicious of the microscopes, telescopes and centrifuges the scientist uses to deduce those electrons and black holes. This suspicion may even extend to the mind of the scientist who conducted the science.
Another way of saying it is that the scientistic mindset finds the everyday world of common experience to be more metaphysically suspect than the world constructed by scientific inquiry. It is to be more confident of the reality of bosons and protons than it is of cars, trees or the wind. The self-contradiction of scientism is that the possibility of science depends on the reality of the world of common experience; if the world of common experience is suspect, then the science that occurs in it is at least as suspect. And it is metaphysics that explores and defends the world of common experience.
At the origin of modern science, Galileo constructed a telescope and looked through it to discover the moons of Jupiter. Galileo's scientific discovery was only possible because he was here, the moons of Jupiter were there, and he was able to look from here to there through the telescope. Galileo did not discover the distinction between here and there; he brought the distinction into science and it is that distinction (among other things) that made his science possible. His science is a science of reality only because the distinction between here and there is a distinction not merely in our minds, but in reality as well. If the distinction between here and there is not real, or is just a fantasy of our minds, then the science conducted in light of it is a fantasy as well. In fact, Galileo's science is a science of reality only to the extent that the metaphysics supporting it is a metaphysics of reality.
The reader may recognize Immanuel Kant lurking in that last paragraph, and Kant is the great philosopher of modernity because he understood the meaning of the presumptions of modern thought and refused to turn from their consequences. He did not try to have his cake and eat it too, as so many modern thinkers do.
The scientistic mindset doesn't get this, and, unlike Kant, tends to think that science can produce the metaphysics that would underwrite its own possibility. This is endemic to contemporary mind science. It is amazing how many mind investigators quote Kant yet how little he is understood. Mind investigators find the metaphysical status of the mind to be dubious and mysterious, but have great confidence in the metaphysical ground of the scientific conclusions this shadowy mind draws. It's as though they think a fictional scientist in a movie can draw real conclusions about the size of the theater in which the movie is shown. Alas, a fictional scientist can only conduct a fictional science... and a science of reality must start in reality, and to do that it needs a metaphysics of reality. | <urn:uuid:d3275516-56a3-463d-9cf4-779e39b18ef3> | CC-MAIN-2017-22 | http://lifesprivatebook.blogspot.com/2010/02/definition-of-scientism.html | s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607702.58/warc/CC-MAIN-20170523202350-20170523222350-00103.warc.gz | en | 0.96949 | 792 | 2.703125 | 3 |
A fierce ocean predator became prey once its carcass washed onto a spit at the mouth of the Klamath River on Tuesday.
The shark was believed to have washed ashore between dusk Monday and dawn Tuesday.
Scientists from the Department of Fish and Wildlife said it appeared to be a great white shark about 10 feet long, based on a photo provided to them.
Tuesday morning it still had most of its body parts sans a left fin and jaw, but by Tuesday afternoon it looked like a meat slab from afar.
Its tail and fins were missing and it appeared to be decapitated.
Waves crashed over the carcass as seagulls descended upon its sliced open body.
One man reckoned humans gutted it; he saw at least one person cut off the tail, then hold it up in celebration. The tail was large, extending past the shoulder of the man who detached it, he said.
Removing body parts from great white sharks is illegal, because they are protected by the California Endangered Species Act.
“All take and possession of white sharks and their parts is not legal,” said DFW Marine Communications Coordinator Carrie Wilson in an e-mail.
Some fishermen who were lined along the mouth’s banks sport-fishing for this year’s bounty of salmon said they had noticed the shark swimming near the shore for about a week.
The salmon have attracted fishermen, seagulls and sea lions, which are crowding around the mouth. It’s the sea lions that likely brought the shark, said Ed Roberts, an environmental scientist for the DFW.
“White sharks primarily eat mammals,” said Roberts. “Odds are it’s probably prey” that brought the shark so close to shore.
The shark is considered to be sub-adult, so the cause of death wouldn’t be age, Roberts said, add it’s unclear what led to its demise.
Considering its size and how close it is to the ocean, the shark will likely not be disposed of by any agency. Instead, it’ll probably remain until the tide swallows it back. | <urn:uuid:f1b4fbaf-ed69-4935-a4af-da67ea001e04> | CC-MAIN-2014-49 | http://www.triplicate.com/News/Local-News/10-foot-shark-washes-ashore | s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416405325961.18/warc/CC-MAIN-20141119135525-00202-ip-10-235-23-156.ec2.internal.warc.gz | en | 0.967108 | 446 | 2.515625 | 3 |
Foundations and Frameworks
At Kingsway Christian School we are committed to establishing a strong foundation for reading. We do this through the ABeka Reading Program which emphasizes the skills of phonemic awareness, phonics, and word fluency in the early elementary grades.
As students develop their reading skills fluency, comprehension and vocabulary play a larger role in reading. For this reason, our desire is to teach students to read with understanding so that they comprehend the author’s intended message and thus apply that knowledge to God’s word.
To aid in this process, we have selected the Foundations & Frameworks program, which was developed at Briarwood Christian School in Birmingham, Alabama.
Here are a few highlights of the program:
- Each classroom teacher participates in 2 week training and is certified to use Foundations & Frameworks (F&F) in his or her classroom.
- Each unit focuses on one main reading comprehension skill (i.e. Sequence of Events) and takes approximately 3 weeks to complete.
- Each unit uses three books with differing levels of difficulty so each child is learning to use the same skill at a level that meets their needs.
- Each child has some choice in the book chosen as they get to “vote” for their favorite book to read for each instructional unit. These books are from among the best in children's literature and are specifically chosen to target a specific reading comprehension skill.
- One of the most important parts of the program is that teacher’s model and constantly teach that skill. The teacher and students use a visual tool to help the students demonstrate their “thinking” as they work through their books. “When a student fails to think while he is reading, he is not truly reading.” (F&F creators)
- Children read a section of their book for their daily reading assignment. It is here that they interact with the books in a reading log as they use a visual tool to show their understanding of what they are reading.
- Children come daily to discussion groups where they interact with the teacher and other children about their books. Teachers are able to evaluate their understanding in a personal and hands-on way helping each student best reach their potential.
- At the end of the unit, students synthesize the content of the book in a group project and present their understanding to the rest of the class. (Intellectual Art)
- Finally students are tested on material not merely focusing on the content of their book, but on their use of the thinking skill for comprehension they have been developing.
It is our desire for our students to become critical, Biblically focused thinkers. For this reason we believe Foundations & Frameworks will be one of our greatest tools for teaching comprehension. As the writers of the program state best, “Teaching children to read is a serious responsibility.” May the Lord bless Kingsway’s desire for equipping students through the F&F program. | <urn:uuid:c5be9e69-3b83-434c-b7c7-f96deeef0eb8> | CC-MAIN-2014-41 | http://www.kingswayschool.org/academics/foundations-and-frameworks/ | s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657135558.82/warc/CC-MAIN-20140914011215-00058-ip-10-234-18-248.ec2.internal.warc.gz | en | 0.963046 | 613 | 3.515625 | 4 |
Shopping from United States? Please visit the correct site for your location.
Standards Based Map Activities
This product is not currently available.
To help you find what you're looking for, see similar items below.
This product has not been rated yet.
0 reviews (Add a review)
Map and geography skills are easy to teach with this engaging, classroom-tested resource. Skills covered include cardinal and intermediate directions, map grids, scales, map keys, and more. Includes bright, vocabulary-building landform poster.
This product has not been reviewed yet. | <urn:uuid:db30b2ea-e275-40ce-898c-0546ec6371fc> | CC-MAIN-2018-30 | https://eu-shop.scholastic.co.uk/products/Standards-Based-Map-Activities-9780439517744 | s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592875.98/warc/CC-MAIN-20180722002753-20180722022753-00314.warc.gz | en | 0.913772 | 118 | 2.765625 | 3 |
ORIGINAL RESEARCH article
Marine Heatwaves Exceed Cardiac Thermal Limits of Adult Sparid Fish (Diplodus capensis, Smith 1844)
- 1South African Institute for Aquatic Biodiversity, Makhanda, South Africa
- 2Department of Ichthyology and Fisheries Science, Rhodes University, Makhanda, South Africa
- 3Department of Zoology and Entomology, Rhodes University, Makhanda, South Africa
- 4Centro de Ciências do Mar (CCMAR), Universidade do Algarve, Faro, Portugal
- 5Department of Geological Sciences, Stanford University, Stanford, CA, United States
- 6Hopkins Marine Station, Stanford University, Pacific Grove, CA, United States
- 7School of Life and Environmental Sciences, Deakin University, Geelong, VIC, Australia
Climate change not only drives increases in global mean ocean temperatures, but also in the intensity and duration of marine heatwaves (MHWs), with potentially deleterious effects on local fishes. A first step to assess the vulnerability of fishes to MHWs is to quantify their upper thermal thresholds and contrast these limits against current and future ocean temperatures during such heating events. Heart failure is considered a primary mechanism governing the upper thermal limits of fishes and begins to occur at temperatures where heart rate fails to keep pace with the thermal dependency of reaction rates. This point is identified by estimating the Arrhenius breakpoint temperature (TAB), which is the temperature where maximum heart rate (fHmax) first deviates from its exponential increase with temperature, and the incremental Q10 breakpoint temperature (TQB), which is where the Q10 temperature coefficient (relative change in heart rate for a 10°C increase in temperature) for fHmax abruptly decreases during acute warming. Here we determined TAB, TQB and the temperature that causes cardiac arrhythmia (TARR) in adults of the marine sparid, Diplodus capensis, using an established technique. Using these thermal indices, we further estimated adult D. capensis vulnerability to contemporary MHWs and increases in ocean temperatures along the warm-temperate south-east coast of South Africa. For the established technique, we stimulated fHmax with atropine and isoproterenol and used internal heart rate loggers to measure fHmax under conditions of acute warming in the laboratory. We estimated average TAB, TQB, and TARR values of 20.8°C, 21.0°C, and 28.3°C. These findings indicate that the physiology of D. capensis will be progressively compromised when temperatures exceed 21.0°C up to a thermal end-point of 28.3°C. Recent MHWs along the warm-temperate south-east coast, furthermore, are already occurring within the TARR threshold (26.6–30.0°C) for cardiac function in adult D. capensis, suggesting that this species may already be physiologically compromised by MHWs. Predicted increases in mean ocean temperatures of a conservative 2.0°C may further result in adult D. capensis experiencing more frequent MHWs as well as a contraction of the northern range limit of this species as mean summer temperatures exceed the average TARR of 28.3°C.
Rising ocean temperatures and the concurrent increase in anomalous thermal events (e.g., marine heatwaves, MHWs) (IPCC, 2014) can exceed physiological thresholds of marine organisms, compromising energetic processes (e.g., growth, reproduction, and behavior) and ultimately influencing their fitness and survival (Doney et al., 2012; Huey et al., 2012; Lefevre, 2016; Abram et al., 2017). These individual-level effects can scale up to deleterious levels for populations and communities, with knock-on effects on the functioning of ecosystems (Pörtner and Peck, 2010). Predictions of marine species responses to temperature variability in future climate change scenarios require an understanding of their physiological processes and limits (Sinclair et al., 2016).
One approach to estimate thermal tolerance and optima of fishes is through quantifying heart rate (fH) responses to thermal variability because increasing heart rate is a primary mechanism to fuel increased oxygen demand as temperature rises (e.g., Cooke et al., 2010; Casselman et al., 2012; Anttila et al., 2014; Drost et al., 2014; Ferreira et al., 2014; Sidhu et al., 2014; Hansen et al., 2017; Muller et al., 2020; Skeeles et al., 2020). When the maximum heart rate (fHmax) of fish fails to increase in proportion to the increased oxygen demand as temperatures rise, mismatches between oxygen supply and demand can arise and potentially impair energetic processes (Steinhausen et al., 2008; Farrell, 2009; Eliason et al., 2011, 2013), and these processes may be compromised at critical maximum temperatures (Sandblom et al., 2016; Eliason and Anttila, 2017; Ekström et al., 2019; Skeeles et al., 2020). The point where fHmax stops keeping pace with the thermodynamic effects of temperature can be estimated as the Arrhenius breakpoint temperature (TAB) or the temperature where Q10 temperature coefficients decrease abruptly (TQB), and both metrics can be used as estimates of the upper thermal limits for energy homeostasis (Anttila et al., 2014; Ferreira et al., 2014; Sidhu et al., 2014; Skeeles et al., 2020). Further increases in temperature can eventually lead to an arrhythmic heartbeat (TARR) and potentially cardiac arrest, which normally occurs just below the upper critical temperature (TCRIT/CTmax) (Casselman et al., 2012; Anttila et al., 2013, 2014; Ekström et al., 2019; Skeeles et al., 2020). Measurements of fishes’ heart rate function at a range of temperatures may therefore provide important insights into how ocean warming and increases in extreme warming events, such as MHWs, can affect their energetic functioning (Fey et al., 2015; Ekström et al., 2019; Stillman, 2019).
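The incremental Q10 logic described above can be sketched numerically. In the Python snippet below, the fHmax values and the 2.0 breakpoint threshold are illustrative placeholders, not data or criteria from this study:

```python
# Hypothetical sketch of the incremental Q10 breakpoint (TQB) idea.
# The fHmax values and the 2.0 threshold below are illustrative only.

def incremental_q10(t1, f1, t2, f2):
    """Relative rate change per 10 degrees C: Q10 = (f2/f1) ** (10 / (t2 - t1))."""
    return (f2 / f1) ** (10.0 / (t2 - t1))

temps = [14, 16, 18, 20, 22, 24]      # acute test temperatures (degrees C)
fhmax = [60, 72, 86, 103, 110, 114]   # synthetic maximum heart rates (beats per min)

q10s = [incremental_q10(temps[i], fhmax[i], temps[i + 1], fhmax[i + 1])
        for i in range(len(temps) - 1)]

# TQB is taken as the start of the first interval where Q10 drops below the
# threshold, i.e. where fHmax stops keeping pace with warming.
tqb = next((temps[i] for i, q in enumerate(q10s) if q < 2.0), None)
print(tqb)  # -> 20 with these synthetic values
```

With real data, TAB is estimated analogously, by fitting two regression lines to ln(fHmax) against inverse temperature (an Arrhenius plot) and locating their intersection.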
The Diplodus sargus species group (comprising five species; Fricke et al., 2016) is distributed in the warm-temperate, shallow waters (<20 m depth) of the Mediterranean, North-east Atlantic, South-east Atlantic and Western Indian Oceans and is thought to be vulnerable to ocean warming due to a narrow (15–20°C) reproductive scope (Potts et al., 2014). Although this may be attributed to the stenothermy of the earliest life stages (eggs and larvae), no information exists on the upper thermal tolerance and performance of the larval or adult stages of this species complex (see Kemp, 2009; Madeira et al., 2012, 2013; van der Walt et al., 2021, for upper thermal tolerance of the juvenile stage).
Blacktail seabream Diplodus capensis (Smith, 1844) is endemic to southern Africa, comprising two disjunct populations, one distributed along the south-eastern coast of southern Africa from Cape Point to southern Mozambique (Mann, 1992; Smith and Heemstra, 2012) and the other distributed from Namibia to southern Angola (Richardson, 2010). Throughout the distribution of D. capensis along South Africa’s coastal zone, changes in ocean temperature patterns are already occurring as a result of global climate change (Potts et al., 2015). The subtropical east coast is warming as a consequence of the strengthening of the Agulhas Current (Rouault et al., 2010). In contrast, temperatures in the warm-temperate south coast are predicted to become increasingly variable due to the strengthening and warming of the Agulhas Current’s water on the Agulhas Bank and an increase in upwelling favorable easterly winds (Maree et al., 2000; Roberts, 2005; Lutjeharms, 2007; Schlegel et al., 2017; Duncan et al., 2019). Discrete prolonged anomalous MHWs are expected to be more intense and longer in duration in this region and are predicted to increase in intensity and frequency over time (Schlegel et al., 2017). Consequently, coastal fishes of this warm-temperate region, including D. capensis, will likely experience the greatest thermally mediated physiological impacts in South Africa (Duncan et al., 2019).
The aim of this study was to use an established technique to estimate cardiac indices of thermal tolerance (TAB, TQB, and TARR) for adult D. capensis and to relate these to the contemporary and predicted extreme thermal events. To achieve this, we used micro heart rate loggers to estimate fHmax under conditions of acute warming in a laboratory and used recent in situ coastal water temperatures to examine the likely physiological impact of changing temperatures on the species.
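As a minimal illustration of how cardiac thresholds can be compared against a coastal temperature record, the sketch below classifies a short series of daily temperatures against the average thresholds reported for adult D. capensis in this study (TQB = 21.0°C, TARR = 28.3°C); the daily temperature values themselves are synthetic, not the in situ record:

```python
# Synthetic daily coastal temperatures (degrees C); not the study's in situ record.
TQB, TARR = 21.0, 28.3   # average cardiac thresholds estimated in this study

daily_sst = [19.5, 20.8, 22.1, 24.0, 26.6, 28.5, 29.1, 27.9, 25.2, 21.3]

# Days where cardiac pacing is expected to be progressively compromised
impaired = sum(1 for t in daily_sst if TQB < t <= TARR)
# Days exceeding the arrhythmia threshold
critical = sum(1 for t in daily_sst if t > TARR)
print(impaired, critical)  # -> 6 2
```

The same exceedance counting could be applied to any daily temperature series to flag periods, such as MHWs, that fall within or above the species' cardiac limits.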
Materials and Methods
Study Species and Sampling Method
For this study, 16 adult D. capensis individuals (mean ± SD: 0.49 ± 0.16 kg, range: 0.23–0.81 kg) were collected in summer (December 2017–January 2018) from the Port Alfred surf zone (33° 36′ 3.71″ S; 26° 54′ 38.49″ E; Figure 1A) along the warm-temperate south-east coast of South Africa using hook and line. After collection, all individuals were transported in a sealed 5,000 L aerated saltwater tank to the NRF-SAIAB Aquatic Ecophysiology Research Platform (AERP) laboratory at the Department of Ichthyology and Fisheries Science, Rhodes University, Makhanda. Individuals were next transferred to an aerated 5,900 L indoor cylindrical saltwater recirculating holding aquaculture system set at 20.0°C, the mean water temperature at which they were collected, and acclimated for a minimum of 36 h prior to the first experiment.
Figure 1. (A) Recent mean February in situ coastal water temperatures for South Africa (from Smit et al., 2013) and the present known distribution of Diplodus capensis (https://portal.obis.org/taxon/273970); and (B) predicted mean February 2100 decadal coastal temperatures [0.2°C × 10 decades (from 2013) = 2°C in 2109; IPCC, 2014] and the predicted distribution of D. capensis based on an average TARR thermal index of 28.3°C. Study region (Port Alfred surf zone and Kenton-on-Sea) in the Eastern Cape indicated by black outlined rectangle.
Individuals remained in this system for the experimental period, which lasted 2 weeks with a photoperiod of 12 h L: 12 h D. Individuals were fed a mixed diet of squid (Loligo reynaudii) and sardine (Sardinops sagax) every other day and starved 36 h prior to the experiments. Salinity was kept constant at 35 ppt, dissolved oxygen was kept at 100% saturation, and pH remained in the range of 7.9–8.4 in accordance with the water quality parameters taken during fish collection. Ammonia, nitrate and nitrite were maintained at <0.25 mg L–1, <2 mg L–1, and <0.1 mg L–1, respectively (Salifert Test Kits). Water quality was monitored every second day. If water salinity, pH, ammonia, nitrate and nitrite were found to be high, a partial water change was conducted, whereby a quarter of the seawater was removed from the holding aquaculture system and replaced with either filtered rainwater (to lower high salinity and pH levels) or fresh seawater (to lower high ammonia, nitrate and nitrite levels).
Estimation of fHmax Indicators
The experimental trials to determine fHmax in adult D. capensis individuals using micro heart rate loggers followed the methodology outlined by Casselman et al. (2012) and Skeeles et al. (2020). Prior to experimental trials, D. capensis were individually placed into an aerated rectangular 250 L tank whereby the water temperature was lowered from 20.0°C to 14.0°C over a 3-h period (2.0°C h–1 decrease) using a Hailea HS-90A chiller to meet the minimum summer water temperatures recorded for the study area over December 2017 and January 2018 when fish were collected. Each individual was then placed into a 250 L tank connected to an overall 800 L aerated recirculating system that had been dosed with 2-phenoxyethanol (C8H10O2; 0.2 ml L–1). Once anaesthesia was induced (Summerfelt and Smith, 1990; Mylonas et al., 2005), the individual was immediately weighed to the nearest gram using an AE ADAM PGL 3,000 g scale and moved to an operating trough for surgery, where anaesthetic seawater solution from the system was applied continuously over the gills to maintain respiration and anaesthesia.
The surgery entailed implanting leadless heart rate loggers into individuals (DST micro HRT, 8.3 mm × 25.4 mm, 3.3 g, Star-Oddi©, Iceland). The loggers were pre-programmed to measure heart rate and internal body temperature every 15 s for a minute (four readings) followed by 1 minute of recording ECG data at 200 Hz. This programmable setting was continuous for a 3-h period. The heart rate logger was attached with two sutures (Clinisut® silk suture; 3-0, South Africa), one tied to the designated anterior hole and the other to the posterior end via a suture wrap (see Skeeles et al., 2020). A single incision of approximately 2.5 cm was made directly below the origin of the left pectoral fin (Figure 2a). This location allowed the heart rate logger to be situated immediately posterior to the pericardium membrane (Figure 2b). The heart rate logger was inserted into the cavity, with the two circular electrodes orientated sideways (positioned at a 45° angle ventrally and in direct contact with the musculature once sealed; Figure 2b). This orientation has been shown to be optimal for Sparids (Muller et al., 2020). The incision was stitched using the two sutures attached to the heart rate logger and coagulating antiseptic gel was applied externally to the wound.
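The duty cycle described above implies the following per-trial data volumes; this is a simple arithmetic check, with the cycle structure taken from the text:

```python
# Arithmetic check of the logger duty cycle: a 1-min phase of heart-rate and
# temperature readings every 15 s (four readings), then 1 min of ECG at 200 Hz,
# repeating continuously over the 3-h trial.

CYCLE_S = 120                    # 60 s HR/temperature phase + 60 s ECG phase
hr_per_cycle = 60 // 15          # four heart-rate/temperature readings per cycle
ecg_per_cycle = 60 * 200         # 12,000 ECG samples per cycle

cycles = (3 * 3600) // CYCLE_S   # 90 full cycles in 3 h
print(cycles, cycles * hr_per_cycle, cycles * ecg_per_cycle)  # -> 90 360 1080000
```

Each 3-h trial therefore yields 360 heart-rate/temperature readings and just over one million ECG samples per fish.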
Figure 2. The position of the incision made on adult sparid Diplodus capensis for the insertion of the Star-Oddi© micro HRT heart rate logger, and the position in which individuals were maintained throughout experiments. The incision (red dashed line) was made directly below the origin of the left pectoral fin (a), shown relative to the pericardium (red) and the pelvic and pectoral girdles (gray) within the abdominal cavity, for the insertion of the Star-Oddi© micro HRT heart rate logger orientated sideways (b). The sideways orientation positioned the logger at a 45° angle ventrally and in direct contact with the musculature. Throughout experiments, individual D. capensis were maintained in an upright position in a weighted sling with a respiratory inflow valve (c).
After surgery, the fish was immediately returned to the rectangular 250 L tank, linked to the aerated 800 L recirculating system dosed with 2-phenoxyethanol, and placed in a weighted foam sling that kept it suspended in an upright position (Figure 2c). The tank had two inflows: a respirator pipe nozzle positioned in the mouth of the fish so that respiration could be maintained, and an inflow to regulate water circulation and temperature (Figure 2c). The flow rate of the respirator was kept constant at 1 L min–1. After 1 h, the fish was removed from the sling and intraperitoneally injected with a solution of atropine sulfate (Sigma-Aldrich; 1.2 mg kg−1) to inhibit vagal tonus to the heart, as well as a saline solution of isoproterenol (Sigma-Aldrich; 1.2 μg kg−1) to stimulate cardiac adrenergic β-receptors (Casselman et al., 2012; Chen et al., 2015; Skeeles et al., 2020). A pilot study to validate the action and efficacy of atropine sulfate and isoproterenol in inducing fHmax, and to assess any post-surgery effects on fHmax, was conducted prior to the experiment by monitoring the heart rate of two anaesthetized individuals (0.62 kg male and 0.47 kg female) at 20.0°C for a further ± 4 h after surgery (Appendix A Supplementary Material).
Five minutes after the intraperitoneal injection, the heart rate logger was set to start recording and the water temperature was raised from 14.0 to 30.0°C at 6.0°C h−1 using an AquaHeat 9.2 kW pump. Water temperature was raised to 30.0°C because this value was indicative of the maximum summer water temperature in the study area between 2013 and 2018 (van der Walt et al., 2021), and at a rate of 6.0°C h−1 to accommodate the maximum period of fHmax drug efficacy (± 4 h; Appendix A Supplementary Figure A.1). At the end of the approximately 3-h experiment, the fish was removed from the sling, euthanized using a lethal dose of 2-phenoxyethanol (0.5 ml L–1), and the logger was retrieved, rinsed, dried and placed in the communication box to download the heart rate data. This experimental protocol was repeated for 14 adult (330–340 mm TL) D. capensis.
Data Processing and Statistical Analysis of fHmax Indicators
Star-Oddi© heart rate loggers return heart rates (beats per minute, BPM) validated using a four-level quality index (QI 0 = great; QI 1 = good; QI 2 = fair; QI 3 = poor) (Muller et al., 2020; Skeeles et al., 2020). The QI of zero was validated against the loggers' ECG output to ensure that the algorithm was working successfully and that the QIs were a true representation of the quality of the recordings (Skeeles et al., 2020). For the pilot study, data from the heart rate loggers were filtered to accept only values with a QI of zero, and the average heart rate was calculated before and after the intraperitoneal injection (Appendix A Supplementary Figure A.1). For the experimental protocol, data were filtered to accept only values with a QI of zero and binned into 0.25°C increments. A heart rate trial was considered successful if readings with a QI of zero spanned at least 80% of the 0.25°C temperature increments. Individuals whose heart rate loggers yielded highly erratic fHmax recordings were excluded from the analysis.
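The QI filtering, 0.25°C binning and 80% coverage check described above can be sketched in Python. This is a minimal illustration only; the column names (`qi`, `temp_c`, `bpm`) and the toy trial data are hypothetical, as the study's actual processing pipeline is not shown.

```python
import numpy as np
import pandas as pd

def filter_and_bin(df, bin_width=0.25, coverage=0.80):
    """Keep only QI == 0 readings, bin them into 0.25 degC temperature
    increments, and flag the trial as successful if QI-0 readings span
    at least 80% of the bins between the observed min and max."""
    good = df[df["qi"] == 0].copy()
    # Assign each reading to a temperature bin (14.00, 14.25, ...).
    good["t_bin"] = (good["temp_c"] / bin_width).round() * bin_width
    binned = good.groupby("t_bin")["bpm"].mean()
    n_expected = int(round((binned.index.max() - binned.index.min()) / bin_width)) + 1
    success = len(binned) / n_expected >= coverage
    return binned, success

# Toy trial: noisy heart rate rising with temperature, mixed QI levels.
rng = np.random.default_rng(1)
temps = np.linspace(14.0, 30.0, 400)
trial = pd.DataFrame({
    "temp_c": temps,
    "bpm": 40 + 4.0 * (temps - 14.0) + rng.normal(0, 2, temps.size),
    "qi": rng.choice([0, 1, 2, 3], size=temps.size, p=[0.7, 0.15, 0.1, 0.05]),
})
binned, success = filter_and_bin(trial)
print(f"{len(binned)} bins populated, trial accepted: {success}")
```

Binning to the nearest 0.25°C multiple keeps the bin labels exact in floating point, which makes the coverage count straightforward.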
The TAB was calculated using piecewise linear regression models (Quasi-Newton estimation) fitted to the Arrhenius plot [natural logarithm of the heart rate, ln (fHmax), against the inverse of temperature in Kelvin (1,000 K–1)] (STATISTICA, v. 12, Statsoft). Heart rate data for temperature bins from 14°C to the temperature corresponding to maximum fHmax were used for the TAB analysis (Ferreira et al., 2014). The incremental Q10 of fHmax for individual fish was determined for every 1.0°C increase using the equation outlined by Ferreira et al. (2014):
Q10 = (fH2/fH1)^[10/(T2 − T1)]

where fH1 and fH2 are heart rates at the first (T1) and second (T2) temperatures, respectively. The incremental Q10 breakpoint (TQB) was estimated by finding the linear equation of the two consecutive points above and below 2.0 and calculating the temperature at which the two lines intersect (Quasi-Newton estimation, STATISTICA, v. 12, Statsoft). The value of 2.0 was selected as it is regarded as a regular rate of change of routine metabolism with temperature for fish (Drost et al., 2014). The Tmax indicator was the temperature at which fHmax reached its absolute maximum value. The TARR indicator was the temperature at which fHmax first began to decrease rapidly after plateauing, indicating assumed cardiac arrhythmia. T-tests were performed in RStudio (version 4.0.0) to compare TAB and TQB, TAB and TARR, and TQB and TARR. Normality of distributions was tested using a Shapiro-Wilk test for each index, and homoscedasticity was tested using Levene's test for each comparison. When normality and homogeneity assumptions were not satisfied, Wilcoxon tests were performed in place of the parametric tests.
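The incremental Q10 calculation and its breakpoint can be sketched as below. This is a simplified version: the study used Quasi-Newton estimation in STATISTICA, whereas here the TQB is found by linearly interpolating the temperature at which the Q10 segment crosses the horizontal Q10 = 2.0 line, equivalent to the line-intersection approach described. The synthetic fHmax curve is invented for demonstration.

```python
import numpy as np

def incremental_q10(temps, fh):
    """Q10 between consecutive temperature steps: (fH2/fH1)**(10/(T2-T1)).
    Each Q10 value is attributed to the upper temperature of its step."""
    temps, fh = np.asarray(temps, float), np.asarray(fh, float)
    q10 = (fh[1:] / fh[:-1]) ** (10.0 / (temps[1:] - temps[:-1]))
    return temps[1:], q10

def q10_breakpoint(temps, q10, threshold=2.0):
    """Temperature where Q10 first crosses below the threshold, by linear
    interpolation between the consecutive points above and below it."""
    for i in range(1, len(q10)):
        if q10[i - 1] >= threshold > q10[i]:
            frac = (q10[i - 1] - threshold) / (q10[i - 1] - q10[i])
            return temps[i - 1] + frac * (temps[i] - temps[i - 1])
    return np.nan

# Synthetic fHmax curve: Q10 = 2.4 below a break at 21 degC, 1.5 above it.
t = np.arange(14.0, 29.0, 1.0)
fh = np.where(t <= 21.0,
              50 * 2.4 ** ((t - 14.0) / 10.0),
              50 * 2.4 ** 0.7 * 1.5 ** ((t - 21.0) / 10.0))
tt, q10 = incremental_q10(t, fh)
tqb = q10_breakpoint(tt, q10)
print(f"TQB ~ {tqb:.2f} degC")
```

With the breakpoint built in at 21.0°C and Q10 values of 2.4 and 1.5 on either side, the interpolated TQB falls between 21 and 22°C, as expected.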
Marine Heatwaves and the Thermal Tolerance of Diplodus capensis
Hourly in situ water temperature data (2013–2018) were provided by the South African Environmental Observation Network (SAEON, Elwandle Node, Port Elizabeth, Eastern Cape) for Kenton-on-Sea (mooring Kariega_CTlog_Lower, measuring depth < 2 m), which is situated approximately 30 km from Port Alfred (Figure 1A). Hourly temperatures were averaged to give daily water temperatures, from which MHWs were identified using the heatwaveR package (version 0.4.5; Schlegel and Smit, 2018) in RStudio (version 4.0.0) to characterize their frequency, intensity, and duration (Appendix B Supplementary Material).
Following Hobday et al. (2016), a MHW was defined as a “discrete prolonged anomalously warm water event.” In this case, “discrete” means an identifiable event with recognizable start and end dates, “prolonged” implies a duration of at least 5 days, and “anomalously warm” means temperatures warm relative to a baseline climatology and threshold (Oliver et al., 2019). The climatological mean and seasonally varying 90th percentile threshold were calculated for each calendar day of the year by pooling all data within an 11-day window across all years (Hobday et al., 2016; Schlegel et al., 2017; Oliver et al., 2019). Marine heatwaves were identified as periods when temperatures exceeded the seasonally varying 90th percentile threshold for at least 5 days (Schlegel et al., 2017; Oliver et al., 2019). Furthermore, discrete events with well-defined start and end dates but with “breaks” between events lasting ≤ 2 days, followed by subsequent events of ≥ 5 days, were considered to be continuous events (Hobday et al., 2016; Schlegel et al., 2017).
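The study used the heatwaveR package in R; a simplified Python sketch of the same Hobday et al. (2016) logic (11-day pooled climatological threshold, seasonally varying 90th percentile, events of at least 5 days, breaks of at most 2 days joined) might look like this. The 10-year synthetic series with one injected warm anomaly is for demonstration only.

```python
import numpy as np
import pandas as pd

def detect_mhws(dates, temps, window=11, pctile=90, min_len=5, max_gap=2):
    """Hobday et al. (2016)-style MHW detection on a daily series."""
    s = pd.Series(np.asarray(temps, float), index=pd.DatetimeIndex(dates))
    doy = s.index.dayofyear.to_numpy()
    half = window // 2
    thresh = np.empty(s.size)
    for d in np.unique(doy):
        # Pool all readings within +/-5 calendar days of day-of-year d,
        # across all years (with wrap-around at the year boundary).
        near = (np.abs(doy - d) <= half) | (np.abs(doy - d) >= 365 - half)
        thresh[doy == d] = np.percentile(s.values[near], pctile)
    hot = s.values > thresh
    # Start/end (inclusive) indices of consecutive exceedance runs.
    edges = np.flatnonzero(np.diff(np.r_[0, hot.astype(int), 0]))
    runs = [[a, b - 1] for a, b in zip(edges[::2], edges[1::2])]
    events = [r for r in runs if r[1] - r[0] + 1 >= min_len]
    merged = []
    for ev in events:  # join qualifying events separated by <= 2-day breaks
        if merged and ev[0] - merged[-1][1] - 1 <= max_gap:
            merged[-1][1] = ev[1]
        else:
            merged.append(ev)
    return merged

# Synthetic 10-year daily series with one injected 10-day warm anomaly.
rng = np.random.default_rng(0)
dates = pd.date_range("2009-01-01", "2018-12-31", freq="D")
doy = dates.dayofyear.to_numpy()
temps = 20 + 4 * np.sin(2 * np.pi * (doy - 45) / 365.25)
temps = temps + rng.normal(0, 0.3, dates.size)
temps[400:410] += 3.0  # the injected event
events = detect_mhws(dates, temps)
print(len(events), "MHW event(s) detected")
```

The injected 10-day, +3°C anomaly clearly exceeds the pooled 90th percentile threshold, while isolated noisy exceedances rarely persist for 5 consecutive days.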
After the events were defined, a set of metrics was calculated, including the duration (time from start to end dates, in days), mean and maximum intensity (measured as anomalies relative to the climatological mean, in °C), and cumulative intensity (the integrated intensity over the duration of the event, analogous to degree-heating-days; °C-days) (Schlegel et al., 2017).
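Given an event's start and end indices and the climatological mean, these metrics reduce to simple anomaly statistics. The sketch below uses invented toy numbers, not values from the study.

```python
import numpy as np

def mhw_metrics(temps, clim, start, end):
    """Duration (days), mean/max intensity (degC anomaly relative to the
    climatological mean) and cumulative intensity (degC-days) for one
    event spanning the inclusive index range [start, end]."""
    anom = np.asarray(temps[start:end + 1]) - np.asarray(clim[start:end + 1])
    return {
        "duration_days": end - start + 1,
        "mean_intensity": float(anom.mean()),
        "max_intensity": float(anom.max()),
        # Daily time step, so the sum of anomalies is already degC-days.
        "cumulative_intensity": float(anom.sum()),
    }

# Toy event: 6 days at 2.2-4.0 degC above a flat 20 degC climatology.
temps = np.array([20.0, 22.5, 23.0, 24.0, 23.5, 22.8, 22.2, 20.1])
clim = np.full(8, 20.0)
m = mhw_metrics(temps, clim, 1, 6)
print(m)
```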
Predicted Marine Heatwaves and Links to the Thermal Tolerance of Diplodus capensis
Predicted future daily water temperatures for the beginning of the next century, in the same study area, were approximated by adding 2.0°C to the recent five-year daily in situ water temperatures. A conservative temperature increase of 2.0°C was used because, globally, SST is predicted to increase by 0.2°C per decade (IPCC, 2014). Marine heatwaves were then identified using the process described above (Appendix C Supplementary Material). This was done to estimate how a future temperature increase of 2.0°C may shift MHW events into the physiologically preferred threshold range (TAB/TQB) of adult D. capensis, above this preferred range, and into or above their physiologically tolerable TARR threshold.
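The classification of shifted MHW peaks against the study's thermal thresholds (mean TAB/TQB of approximately 21.0°C and mean TARR of 28.3°C, as reported in the Results) can be illustrated as follows. The event peak temperatures used here are invented for demonstration.

```python
import numpy as np

# Mean thermal indices reported for adult D. capensis in this study (degC).
TAB_TQB, TARR = 21.0, 28.3

def classify_event_peak(peak_temp):
    """Place an MHW peak temperature relative to the study's thresholds."""
    if peak_temp >= TARR:
        return "above TARR (cardiac arrhythmia likely)"
    if peak_temp > TAB_TQB:
        return "above preferred TAB/TQB threshold"
    return "within/below preferred threshold"

recent_peaks = np.array([19.5, 22.4, 25.1, 28.4])  # illustrative peaks
future_peaks = recent_peaks + 2.0                  # 0.2 degC/decade x 10
for now, then in zip(recent_peaks, future_peaks):
    print(f"{now:.1f} -> {then:.1f} degC: {classify_event_peak(then)}")
```

Adding a uniform 2.0°C leaves the anomaly structure of the series unchanged, which is why the same events are detected; only their position relative to the fixed physiological thresholds shifts.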
Predicted Impact of Increasing Mean Sea Temperatures on the Distribution of Adult Diplodus capensis
In order to assess the relationship between mean coastal temperatures and the distribution of D. capensis, the current distribution of D. capensis3 was plotted against mean summer (February) in situ temperatures recorded along the South African coastline between 1972 and 2012 (Smit et al., 2013). A uniform rate of temperature increase of 0.2°C per decade for the whole coastline was then used, in line with the global average (IPCC, 2014), and added to the mean summer in situ temperatures to estimate future (2100) SST values. Thermal habitat lost in the future was estimated by removing habitat with mean summer temperatures above the physiologically tolerable TARR of the species.
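The habitat-loss estimate amounts to masking coastline sites whose warmed mean summer temperature exceeds the TARR. A toy sketch follows; the site temperatures along the west-to-east transect are illustrative stand-ins, not the Smit et al. (2013) values.

```python
import numpy as np

TARR = 28.3  # mean arrhythmia temperature for adult D. capensis (this study)

# Illustrative mean summer (February) SSTs along a west-to-east transect.
temp_now = np.array([11.0, 16.0, 20.0, 22.5, 24.0, 26.0, 27.0])
temp_2100 = temp_now + 2.0          # 0.2 degC/decade over ~10 decades
suitable_now = temp_now < TARR
suitable_2100 = temp_2100 < TARR
lost = suitable_now & ~suitable_2100  # suitable today, lost by 2100
print(f"sites lost by 2100: {lost.sum()} of {temp_now.size}")
```

Only the warmest (easternmost) site crosses the 28.3°C limit after warming, mirroring the predicted contraction at the tropical edge of the range.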
Results

Twelve of the 14 individuals tested (mean ± SD: 0.51 ± 0.17 kg, range: 0.23–0.81 kg) yielded interpretable results for fHmax with high heart rate logger efficiency, whereby the desired QI of zero was attained across ∼95% of the 0.25°C temperature increments for each trial (Appendix D Supplementary Figure D.1). For the remaining two individuals (mean ± SD: 0.35 ± 0.05 kg, range: 0.31–0.39 kg), the Star-Oddi© heart rate loggers stopped recording after an hour.
The fHmax of all 12 individuals increased with temperature, peaking at an average of 152 beats min–1 ± 17 SD at 28.0 ± 1.7°C (Figure 3A and Table 1). The highest fHmax was generally followed by a plateau and then a decline in heart rate, which signified the beginning of cardiac arrhythmia (TARR) (Figure 3A). The average TARR was 28.3 ± 1.7°C SD (Table 1). Piecewise linear regression models for the 12 individuals yielded detectable Arrhenius breakpoint temperatures (TAB) ranging from 19.8 to 23.4°C with an average of 20.8 ± 1.0°C SD (Figure 3C and Table 1). The incremental Q10 breakpoint (TQB) was similar to the TAB [Wilcoxon W-test; W (22) = 58.5, p = 0.452], ranging from 19.0 to 22.5°C with an average of 21.0 ± 1.0°C SD (Figure 3B and Table 1). The TARR, however, was significantly different from both the TAB [Wilcoxon W-test, W (22) = 0.00, p < 0.01] and the TQB [T-test, T (22) = 12.89, p < 0.01].
Figure 3. Maximum heart rate (fHmax) (A), incremental Q10 analysis of fHmax for 1.0°C increments (B), and Arrhenius plots of natural log of maximum heart rate [ln (fHmax)] against the inverse temperature in Kelvin (1,000 K−1) (C) of twelve adult Diplodus capensis in response to increasing water temperature from 14.0°C. The blue vertical line represents the average maximum heart rate (fHmax = 152 beats min–1 at 28.0°C). The red vertical line represents the average arrhythmic temperature (TARR = 28.3°C). The gray horizontal solid line represents the average Q10 breakpoint (Q10 < 2.0). The purple vertical line represents the incremental Q10 breakpoint temperature (TQB = 21.0°C). The green vertical line represents the average Arrhenius breakpoint temperature (TAB = 20.8°C).
Table 1. Biological information and fHmax index values for individual blacktail, Diplodus capensis, in response to acute warming.
Analysis of daily water temperatures between 2013 and 2018 showed 16 MHWs (Figure 4A and Table 2). Two MHW events (events 5 and 14) resulted in temperatures above 26.6°C, which is one standard deviation (1.7°C) below the adult D. capensis average TARR (28.3°C) thermal index (Figure 3A). Event 14, which occurred between 27 December 2015 and 6 January 2016 (mean intensity = 3.1°C; maximum intensity = 4.0°C above the climatological mean), was the most intense summer MHW within this five-year period, with maximum temperatures reaching 29.5°C and exceeding the adult D. capensis average TARR (28.3°C) (Figure 4A and Table 2). Five other MHW events were identified (Figure 4A and Table 2) that may also have physiologically compromised adult D. capensis, with temperatures above the TAB/TQB (22.0–26.6°C) threshold. Within the 20% of days on which daily in situ water temperatures were physiologically preferred for adult D. capensis (average TAB/TQB), three MHW events were identified (Figure 4A and Table 2). Within the 47% of days on which daily in situ water temperatures were below the adult D. capensis physiologically preferred average TAB/TQB, six MHWs were identified, of which one, occurring between 2 August and 19 August 2014, lasted 18 days and reached maximum temperatures of 23.8°C (Figure 4A and Table 2).
Figure 4. (A) Recent (2013–2018) and (B) predicted next century (2109–2114) daily mean (black line) and maximum (gray line) in situ water temperatures for the warm-temperate south-east coast study area, indicating the identified marine heatwave (MHW) events (red shaded areas) above the seasonally varying 90th percentile threshold (orange line) in relation to adult Diplodus capensis thermal indices (TARR, TAB, and TQB). The blue line represents the climatological mean. The red dashed horizontal line and shaded red rectangle represent the mean arrhythmic temperature and standard deviation (mean TARR ± SD = 28.3 ± 1.7°C). The green dashed horizontal line and shaded green rectangle represent the mean Arrhenius breakpoint temperature and standard deviation (mean TAB ± SD = 20.8 ± 1.0°C). The purple dashed horizontal line and shaded purple rectangle represent the mean incremental Q10 breakpoint temperature and standard deviation (mean TQB ± SD = 21.0 ± 1.0°C). Red circled numbers represent the event numbers of the identified MHWs. Predicted daily mean and maximum in situ water temperatures were calculated from the recent values by adding 2.0°C [0.2°C × 10 decades (from 2013) = 2.0°C in 2109; IPCC, 2014], resulting in the same MHW events being identified as in the recent period, based on the climatological mean and seasonally varying 90th percentile threshold.
Table 2. The number, duration (time from start to end dates of the MHW event, in days), peak (day within the MHW event with the highest temperature), mean intensity (mean temperature anomaly relative to the climatological mean during the MHW event, in °C), maximum intensity (highest temperature anomaly relative to the climatological mean during the MHW event, in °C), and cumulative intensity (sum of daily intensity anomalies over the duration of the MHW event, in °C-days) metrics for marine heatwave (MHW) events calculated from recent (2013–2018) and predicted (2109–2114) daily in situ water temperatures for the warm-temperate south-east coast study area.
Likely Climate Change Scenario
A 2.0°C increase in water temperature by the beginning of the next century shifts MHW events into different thermal threshold ranges (TAB, TQB, and TARR). Four of the MHW events occur within the adult D. capensis TARR threshold range (26.6–30.0°C), with maximum temperatures reaching 30.1°C, exceeding its average TARR (28.3°C). Six MHW events are above the TAB/TQB threshold (22.0–26.6°C), and four fall within the physiologically preferred TAB/TQB threshold (19.8–22.0°C) for adult D. capensis (Figure 4B). Only two MHW events are below the physiologically preferred TAB/TQB threshold for adult D. capensis (Figure 4B).
Mean summer in situ water temperatures recorded between 1972 and 2012 throughout the current distribution of D. capensis increase from 11.0°C at Cape Point to 27.0°C in northern KwaZulu-Natal (Figure 1A). Mean summer temperatures currently do not exceed the estimated average TARR thermal index (28.3°C) for adult D. capensis. If mean summer temperatures increase by 2.0°C by the beginning of the next century (2100), they will be above the estimated average TARR thermal index in northern KwaZulu-Natal, possibly resulting in a contraction of the distribution range of the species (Figure 1B).
Discussion

We found that occasional recent summer MHWs already exceed the thermal limits for cardiac function in adult D. capensis along the south-east coast of South Africa. As the frequency and intensity of these events are predicted to intensify in the future, summer MHWs may become increasingly detrimental to the physiological functioning, performance and overall survival of adult D. capensis. Increases in mean summer water temperatures may also result in a contraction of the overall distribution of this species, as mean summer temperatures may exceed the average TARR thermal index in tropical northern KwaZulu-Natal (Figure 1).
The fHmax thermal indices for adult D. capensis were consistent among individuals, with the estimated TARR being 6.0–7.4°C higher than the TAB (p < 0.05) and 6.8–8.3°C higher than the TQB (p < 0.05), and with a 0.8–0.9°C difference between the TAB and TQB (p > 0.05). The differences between the TARR and TAB (5.1°C) and between the TAB and TQB (0.84°C) for another endemic South African sparid, the red roman Chrysoblephus laticeps (TAB = 18.7–20.1°C, TARR = 23.8–25.2°C; TQB = 17.1–20.7°C) (Skeeles et al., 2020), were smaller than those of D. capensis and may explain their different distribution patterns. Chrysoblephus laticeps is primarily distributed in the warm-temperate and cool-temperate biogeographical regions of South Africa (Skeeles et al., 2020), whereas D. capensis is primarily distributed in the warm-temperate and subtropical regions. Adult C. laticeps are also found in deeper waters (0–100 m; Götz et al., 2008) than D. capensis (0–40 m; Mann, 1992), where temperatures are likely to be cooler and less variable.
The large difference between the TAB/TQB and the TARR suggests that adult D. capensis can withstand relatively short-term water temperature increases, which is characteristic of eurythermic species (Sidhu et al., 2014). Even though this window is relatively wide, the impact of long-term increases in water temperature beyond the TAB is uncertain. However, because the metabolic Q10 effect of warming on tissue oxygen demand is > 2.0 (i.e., there was no significant difference between the TAB and TQB), cardiac failure likely begins at a temperature well below the temperature that triggers the TARR, making the margins functionally narrower than observed (Sidhu et al., 2014). This narrow functional thermal margin may explain why adult D. capensis do not spawn at temperatures higher than 20.0°C (Potts et al., 2014), which coincides with the mean TAB value of 20.8°C found in this study.
We acknowledge that our fHmax thermal indices and methodology are entirely based on anaesthetized fish (also see Skeeles et al., 2020). Anaesthetics can influence heart rate and, therefore, the accuracy of the TAB thermal index (see Casselman et al., 2012). Anaesthesia, however, allows fHmax to be stimulated at a given temperature under standardized conditions, which is difficult to achieve when a fish is actively swimming (Skeeles et al., 2020). Nevertheless, future studies should, as a comparison, include free-swimming individuals with surgically implanted heart rate loggers, not under the influence of anaesthetics, in a swim-tunnel exposed to various acute increases in temperature (different heating rates). Swimming activity will induce fHmax, and a further cardiac comparison can be made with D. capensis at resting heart rate. This will provide a better understanding of the real-world cardiac physiological response of D. capensis to contemporary and predicted MHWs and temperature variability. Furthermore, the TARR thermal index in this study was an estimate and may not be a true representation of the actual TARR (see Gilbert, 2020), owing to the difficulty of keeping the ECG trace on for the entire duration of the experimental trials, which depletes the heart rate logger's battery.
A recent analysis of inshore (in situ) and offshore (optimally interpolated SST, OISST; Reynolds et al., 2007) temperature data spanning a 21-year time series (in situ: 40 years; OISST: 33 years) indicated that MHWs along the warm-temperate region are more intense and longer in duration than those along the cool-temperate and subtropical regions of South Africa (Schlegel et al., 2017). In this study, within a recent five-year period (2013–2018), 16 MHWs were identified using in situ daily mean water temperatures in the study area, with the hottest MHW attaining 28.4°C, occurring at the same time as the strong El Niño event of 2015/2016 in the northern and tropical Indian Oceans (Gupta et al., 2020), and the longest MHW lasting 18 days (Table 2). These events exceed the maximum mean water temperature, duration and count (25.0°C for 10 days; mean event count of 1.5 ± 1.8 SD) of events recorded within the same region (Schlegel et al., 2017). This, however, could be an artefact of using a longer time series of temperature data (over 30 years, compared with the 5 years used here) for the climatological mean. The average TARR thermal index of 28.3°C was also exceeded during this study. This indicates that adult D. capensis in the study area may already be vulnerable and physiologically impaired as a result of an increase in the intensity and frequency of MHWs in this study area.
In order to mitigate physiological impairments or avoid thermal stress, adult D. capensis may also seek spatial thermal refuge by moving to more favorable conditions (cooler or deeper waters), in their highly heterogeneous thermal environment. Signs of this behavioral thermoregulatory strategy have been demonstrated in other species using acoustic telemetry, where fish move into nearshore shallow tidal creeks on an incoming tide and to deeper cooler waters on the outgoing tide, potentially using these deeper areas as a thermal refuge to avoid extreme warm temperatures (Murchie et al., 2013). Acoustic telemetry studies on adult D. capensis with coded sensor tags and a thermal sensor array would be a useful way to understand if fish actively avoid high temperatures by seeking cooler or deeper waters.
Although the addition of 2.0°C to the five-year daily temperature series provides only a rough prediction of future ocean temperatures, it does offer a sense of the expected physiological stress across the distribution of adult D. capensis. Based on this prediction, extreme summer water temperatures (26.8–30.0°C) may occur on 6% more days, shifting further into the physiological “danger zone” (mean TARR = 28.3°C), with daily maximum temperatures reaching 30.0°C. These findings suggest that if the adult D. capensis average TARR is fixed (a hard upper cardiac limit to thermal tolerance; Morgan et al., 2021), they may not survive this scenario, as water temperatures extend beyond the average TARR. Furthermore, they may not be able to adapt in pace with climate warming, suggesting low potential for evolutionary rescue (Doyle et al., 2011; Klerks et al., 2019; Leeuwis et al., 2021). Adult D. capensis may therefore be living at the edge of their upper thermal cardiac limits, with temperature peaks that exceed physiological limits and could cause hypoxia (Leeuwis et al., 2021), resulting in high mortality (Deutsch et al., 2008; Huey et al., 2012; Genin et al., 2020).
When compared with adults, juvenile D. capensis from the same study region had a wider thermal window and higher mean CTmax end-point of 35.0°C in summer (van der Walt et al., 2021). This suggests that juvenile D. capensis may be more eurythermic and less vulnerable to predicted increases in MHW events compared to their adult counterparts. This may be attributed to the general patterns of increasing thermal sensitivity with body size—larger (older) fish being more thermally sensitive than smaller (younger) fish (Pörtner et al., 2008; Pörtner and Peck, 2010; Dahlke et al., 2020), and may explain why juveniles are able to inhabit highly thermally variable environments such as intertidal pools and estuaries.
In the case of the warm-temperate south-east coast of South Africa, temperature variability is likely to increase (Duncan et al., 2019; van der Walt et al., 2021). An increase in the intensity and frequency of upwelling events, which has contributed to the intensification of temperature variability, has already been recorded along the South African south coast (Duncan et al., 2019). Extreme variability in temperature is often lethal to fish. A regionally extensive MHW event was recorded along the South African east coast at the end of summer 2021, with high temperatures of 24.0–26.0°C persisting for a number of days, followed by an upwelling event during which temperatures decreased rapidly to as low as 10°C. The South African “ibhloko” (isiXhosa for “blob”) resulted in extensive fish and invertebrate kills, with numerous species, including D. capensis, affected (Dayimani, 2021; Department of Environment, Forestry and Fisheries, 2021). Similar events have resulted in fish mortality along the warm-temperate south-east coast, with Hanekom et al. (1989) documenting fish kills, including D. capensis, between 10 January 1984 and March 1989. With predicted increases in climate warming and temperature variability in this region, these events could occur more frequently and result in greater numbers of fish kills.
The findings of this study indicate that suitable thermal habitat for adult D. capensis along the tropical edge of their distribution (northern KwaZulu-Natal) may be lost if mean summer temperatures increase by 2.0°C by the beginning of 2100 (Figure 1B). Range contractions such as this may effectively reduce population sizes and even cause population declines (Neuheimer et al., 2011; Wernberg et al., 2011; Smale and Wernberg, 2013; Deutsch et al., 2015). Poleward expansions of Diplodus populations in response to ocean warming have already been observed for D. bellotti, from its endemic origin (Senegal to Cape Blanco in Mauritania) in the West African upwelling region to the Atlantic coast of the Iberian Peninsula (Robalo et al., 2020). A poleward range expansion and an equatorward range contraction were also predicted for D. capensis in the Angola Benguela Frontal Zone (Potts et al., 2014). The mechanism for the D. capensis range contraction along the west coast of southern Africa, however, was thought to be driven by changes in reproductive scope, which in turn may have been influenced by adult thermal physiology. Taken together with the findings of this study, it appears that species belonging to the genus Diplodus are susceptible to ocean warming, particularly at their warm-water limit, and are likely to shift their distributions under future ocean conditions. This may have major implications for the coastal fisheries that rely on these species, as demonstrated by Smale et al. (2019), who investigated the predicted global effects of MHWs on ecological goods and services.
Collectively, the fHmax thermal indices recorded during this study suggest that when summer in situ daily water temperatures exceed 21.0°C (mean TAB/TQB), adult D. capensis may be physiologically compromised, up to an estimated cardiac collapse at 28.3°C (average TARR index). The number of contemporary MHW events was surprisingly high, with maximum temperatures during the hottest MHW event equalling the adult D. capensis average TARR thermal index, suggesting that they may already be physiologically vulnerable. Predicted increases in the frequency and intensity of MHWs in this region may further compromise adult D. capensis by lowering survival as temperatures exceed the fixed TARR threshold, narrowing their thermal window for acclimation as well as adaptation. Finally, predicted increases in mean summer temperatures beyond 28.3°C at the northern edge of this species' range may result in a range contraction.
Data Availability Statement
The datasets presented in this study can be found in the Figshare online repository: https://figshare.com/s/bf6aed8173be98286f98.
Ethics Statement

The animal study was reviewed and approved by the Rhodes University Ethics Committee (DIFS van der Walt 2017) and the South African Institute for Aquatic Biodiversity Ethics Committee (SAIAB REF#2016/02).
Author Contributions

K-AVDW: investigation, methodology, project administration, formal analysis, data curation, writing—original draft, and visualization. WP: supervision, resources, writing—review and editing, and funding acquisition. FP: supervision and writing—review and editing. AW: investigation, methodology, visualization, and writing—review and editing. MD: formal analysis and writing—review and editing. MS: investigation, methodology, formal analysis, and writing—review and editing. NJ: supervision, resources, writing—review and editing, funding acquisition, and conceptualization. All authors contributed to the article and approved the submitted version.
Funding

Research funding for this work was supported by the National Research Foundation (NRF) Research Development Grants for Y-rated researchers (UID: 93382). K-AVDW was funded by a NRF Innovation Doctoral Scholarship (UID: 95092) and a NRF Extension Doctoral Scholarship (UID: 111071).
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The reviewer SD declared a past co-authorship with one of the authors NJ to the handling Editor.
Acknowledgments

We thank Amber Childs, Matt van Zyl, Nick Schmidt, Seshnee Reddy, Brett Pringle, Ryan Foster, and Martinus Scheepers for assistance in animal collection and experimentation. We also thank the Rock and Surf Super Pro League (RASSPL) for assisting in animal collection. We hereby acknowledge the support provided by the South African Institute for Aquatic Biodiversity-National Research Foundation (SAIAB-NRF) of South Africa’s institutional support system and Rhodes University through the use of infrastructure and equipment provided by the Aquatic Ecophysiology Research Platform (AERP) laboratory. We also acknowledge the use of temperature data provided by the Algoa Bay Sentinel Site for LTER of the NRF-SAEON, supported by the Shallow Marine and Coastal Research Infrastructure (SMCRI) initiative of the Department of Science and Innovation (DSI) of South Africa.
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmars.2021.702463/full#supplementary-material
References

Abram, P. K., Boivin, G., Moiroux, J., and Brodeur, J. (2017). Behavioural effects of temperature on ectothermic animals: unifying thermal physiology and behavioural plasticity. Biol. Rev. 92, 1859–1876. doi: 10.1111/brv.12312
Anttila, K., Casselman, M. T., Schulte, P. M., and Farrell, A. P. (2013). Optimum temperature in juvenile salmonids: connecting subcellular indicators to tissue function and whole-organism thermal optimum. Physiol. Biochem. Zool. 86, 245–256. doi: 10.1086/669265
Anttila, K., Couturier, C. S., Øverli, Ø, Johnsen, A., Marthinsen, G., Nilsson, G. E., et al. (2014). Atlantic salmon show capability for cardiac acclimation to warm temperatures. Nat. Commun. 5:4252. doi: 10.1038/ncomms5252
Casselman, M. T., Anttila, K., and Farrell, A. P. (2012). Using maximum heart rate as a rapid screening tool to determine optimum temperature for aerobic scope in Pacific salmon Oncorhynchus spp. J. Fish. Biol. 80, 358–377. doi: 10.1111/j.1095-8649.2011.03182.x
Chen, Z., Snow, M., Lawrence, C. S., Church, A. R., Narum, S. R., Devlin, R. H., et al. (2015). Selection for upper thermal tolerance in rainbow trout (Oncorhynchus mykiss Walbaum). J. Exp. Biol. 218, 803–812. doi: 10.1242/jeb.113993
Cooke, S. J., Schreer, J. F., Wahl, D. H., and Philipp, D. P. (2010). Cardiovascular performance of six species of field-acclimatized centrarchid sunfish during the parental care period. J. Exp. Biol. 213, 2332–2342. doi: 10.1242/jeb.030601
Dayimani, M. (2021). Rare Fish Wash Up Along Eastern Cape Beaches After Sea Temperatures Plummet. News24, South Africa, March. Available online at: https://www.news24.com/news24/southafrica/news/see-rare-fish-wash-up-along-eastern-cape-beaches-after-sea-temperatures-plummet-20210309 (accessed March 16, 2021).
Department of Environment, Forestry and Fisheries (2021). East Coast Marine Heatwave and Large Fish and Shellfish Washout. Available online at: https://www.environment.gov.za/mediarelease/heatwaveandfishwalkout_eastcoastmarine (accessed March 16, 2021).
Deutsch, C. A., Tewksbury, J. J., Huey, R. B., Sheldon, K. S., Ghalambor, C. K., Haak, D. C., et al. (2008). Impacts of climate warming on terrestrial ectotherms across latitude. Proc. Natl. Acad. Sci. U.S.A. 105, 6668–6672. doi: 10.1073/pnas.0709472105
Doney, S. C., Ruckelshaus, M., Emmett Duffy, J., Barry, J. P., Chan, F., English, C. A., et al. (2012). Climate change impacts on marine ecosystems. Ann. Rev. Mar Sci. 4, 11–37. doi: 10.1146/annurev-marine-041911-111611
Drost, H. E., Carmack, E. C., and Farrell, A. P. (2014). Upper thermal limits of cardiac function for Arctic cod Boreogadus saida, a key food web fish species in the Arctic Ocean. J. Fish Biol. 84, 1781–1792. doi: 10.1111/jfb.12397
Duncan, M. I., Bates, A. E., James, N. C., and Potts, W. M. (2019). Exploitation may influence the climate resilience of fish populations through removing high performance metabolic phenotypes. Sci. Rep. 9:11437. doi: 10.1038/s41598-019-47395-y
Ekström, A., Gräns, A., and Sandblom, E. (2019). Can’t beat the heat? Importance of cardiac control and coronary perfusion for heat tolerance in rainbow trout. J. Comp. Physiol. B 189, 757–769. doi: 10.1007/s00360-019-01243-7
Eliason, E. J., and Anttila, K. (2017). “Temperature and the cardiovascular system,” in Fish Physiology, eds A. K. Gamperl, T. E. Gills, A. P. Farrell, and C. J. Brauner (London: Academic Press), 235–297. doi: 10.1016/bs.fp.2017.09.003
Eliason, E. J., Clark, T. D., Hague, M. J., Hanson, L. M., Gallagher, Z. S., Jeffries, K. M., et al. (2011). Differences in thermal tolerance among sockeye salmon populations. Science 332, 109–112. doi: 10.1126/science.1199158
Eliason, E. J., Clark, T. D., Hinch, S. G., and Farrell, A. P. (2013). Cardiorespiratory performance and blood chemistry during swimming and recovery in three populations of elite swimmers: adult sockeye salmon. Comp. Biochem. Physiol. A Mol. Integr. Physiol. 166, 385–397. doi: 10.1016/j.cbpa.2013.07.020
Farrell, A. P. (2009). Environment, antecedents and climate change: lessons from the study of temperature physiology and river migration of salmonids. J. Exp. Biol. 212, 3771–3780. doi: 10.1242/jeb.023671
Ferreira, E. O., Anttila, K., and Farrell, A. P. (2014). Thermal optima and tolerance in the eurythermic goldfish (Carassius auratus): relationships between whole-animal aerobic capacity and maximum heart rate. Physiol. Biochem. Zool. 87, 599–611. doi: 10.1086/677317
Fey, S. B., Siepielski, A. M., Nussle, S., Cervantes-Yoshida, K., Hwan, J. L., Huber, E. R., et al. (2015). Recent shifts in the occurrence, cause, and magnitude of animal mass mortality events. Proc. Natl. Acad. Sci. U.S.A. 112, 1083–1088. doi: 10.1073/pnas.1414894112
Fricke, R., Golani, D., and Appelbaum-golani, B. (2016). Diplodus levantinus (Teleostei: Sparidae), a new species of sea bream from the southeastern Mediterranean Sea of Israel, with a checklist and a key to the species of the Diplodus sargus species group. Sci. Mar. 80, 305–320. doi: 10.3989/scimar.04414.22B
Genin, A., Levy, L., Sharon, G., Raitsos, D. E., and Diamant, A. (2020). Rapid onsets of warming events trigger mass mortality of coral reef fish. Proc. Natl. Acad. Sci. U.S.A. 117, 25378–25385. doi: 10.1073/pnas.2009748117
Gilbert, M. J. H. (2020). Thermal Limits to the Cardiorespiratory Performance of Arctic Char (Salvelinus alpinus) in a Rapidly Warming North. Ph.D. thesis. Vancouver, VBC: University of British Columbia.
Götz, A., Kerwath, S. E., Attwood, C. G., and Sauer, W. H. H. (2008). Effects of fishing on population structure and life history of roman Chrysoblephus laticeps (Sparidae). Mar. Ecol. Prog. Ser. 362, 245–259. doi: 10.3354/meps07410
Gupta, A. S., Thomsen, M., Benthuysen, J. A., Hobday, A. J., Oliver, E., Alexander, L. V., et al. (2020). Drivers and impacts of the most extreme marine heatwave events. Sci. Rep. 10:19359. doi: 10.1038/s41598-020-75445-3
Hanekom, N., Hutchings, L., Joubert, P. A., and Van Der Byl, P. C. N. (1989). Sea temperature variations in the Tsitsikamma coastal National Park, South Africa, with notes on the effect of cold conditions on some fish populations. South Afr. J. Mar. Sci. 8, 145–153. doi: 10.2989/02577618909504557
Hansen, A. K., Byriel, D. B., Jensen, M. R., Steffensen, J. F., and Svendsen, M. B. S. (2017). Optimum temperature of a northern population of Arctic charr (Salvelinus alpinus) using heart rate Arrhenius breakpoint analysis. Polar Biol. 40, 1063–1070. doi: 10.1007/s00300-016-2033-8
Hobday, A. J., Alexander, L. V., Perkins, S. E., Smale, D. A., Straub, S. C., Oliver, E. C. J., et al. (2016). A hierarchical approach to defining marine heatwaves. Prog. Oceanogr. 141, 227–238. doi: 10.1016/j.pocean.2015.12.014
Huey, R. B., Kearney, M. R., Krockenberger, A., Holtum, J. A. M., Jess, M., and Williams, S. E. (2012). Predicting organismal vulnerability to climate warming: roles of behaviour, physiology and adaptation. Philos. Trans. R. Soc. Lond. B Biol. Sci. 367, 1665–1679. doi: 10.1098/rstb.2012.0005
Kemp, J. O. G. (2009). Effects of temperature and salinity on resting metabolism in two South African rock pool fish: the resident gobiid Caffrogobius caffer and the transient sparid Diplodus sargus capensis. Afr. Zool. 44, 151–158. doi: 10.1080/15627020.2009.11407449
Klerks, P. L., Athrey, G. N., and Leberg, P. L. (2019). Response to selection for increased heat tolerance in a small fish species, with the response decreased by a population bottleneck. Front. Ecol. Evol. 7:270. doi: 10.3389/fevo.2019.00270
Leeuwis, R. H. J., Zanuzzo, F. S., Peroni, E. F. C., and Gamperl, A. K. (2021). Research on sablefish (Anoplopoma fimbria) suggests that limited capacity to increase heart function leaves hypoxic fish susceptible to heatwaves. Proc. Biol. Sci. 288:20202340. doi: 10.1098/rspb.2020.2340
Lefevre, S. (2016). Are global warming and ocean acidification conspiring against marine ectotherms? A meta-analysis of the respiratory effects of elevated temperature, high CO2 and their interaction. Conserv. Physiol. 4:cow009. doi: 10.1093/conphys/cow009
Madeira, D., Narciso, L., Cabral, H. N., and Vinagre, C. (2012). Thermal tolerance and potential impacts of climate change on coastal and estuarine organisms. J. Sea Res. 70, 32–41. doi: 10.1016/j.seares.2012.03.002
Madeira, D., Narciso, L., Cabral, H. N., Vinagre, C., and Diniz, M. S. (2013). Influence of temperature in thermal and oxidative stress responses in estuarine fish. Comp. Biochem. Physiol. A Mol. Integr. Physiol. 166, 237–243. doi: 10.1016/j.cbpa.2013.06.008
Mann, B. Q. (1992). Aspects of the Biology of Two Inshore Sparid Fishes (Diplodus Sargus Capensis and Diplodus Cervinus Hottentotus) Off The South-East Coast of South Africa. Ph.D. thesis. Grahamstown: Rhodes University.
Maree, R. C., Whitfield, A. K., and Booth, A. J. (2000). Effect of water temperature on the biogeography of South African estuarine fishes associated with the subtropical/warm temperate subtraction zone. South Afr. J. Sci. 96, 184–188.
Morgan, R., Finnøen, M. H., Jensen, H., Pélabon, C., and Jutfelt, F. (2021). Low potential for evolutionary rescue from climate change in a tropical fish. Proc. Natl. Acad. Sci. U.S.A. 117, 33365–33372. doi: 10.1073/PNAS.2011419117
Muller, C., Childs, A., Duncan, M. I., Skeeles, M. R., James, N. C., van der Walt, K., et al. (2020). Implantation, orientation and validation of a commercially produced heart-rate logger for use in a perciform teleost fish. Conserv. Physiol. 8:coaa035. doi: 10.1093/conphys/coaa035
Murchie, K. J., Cooke, S. J., Danylchuk, A. J., Danylchuk, S. E., Goldberg, T. L., Suski, C. D., et al. (2013). Movement patterns of bonefish (Albula vulpes) in tidal creeks and coastal waters of Eleuthera, The Bahamas. Fish. Res. 147, 404–412. doi: 10.1016/j.fishres.2013.03.019
Mylonas, C. C., Cardinaletti, G., Sigelaki, I., and Polzonetti-Magni, A. (2005). Comparative efficacy of clove oil and 2-phenoxyethanol as anesthetics in the aquaculture of European sea bass (Dicentrarchus labrax) and gilthead sea bream (Sparus aurata) at different temperatures. Aquaculture 246, 467–481. doi: 10.1016/j.aquaculture.2005.02.046
Oliver, E. C. J., Burrows, M. T., Donat, M. G., SenGupta, A., Alexander, L. V., Perkins-kirkpatrick, S. E., et al. (2019). Projected marine heatwaves in the 21st century and the potential for ecological impact. Front. Mar. Sci. 6:734. doi: 10.3389/fmars.2019.00734
Pörtner, H. O., Bock, C., Knust, R., Lannig, G., Lucassen, M., Mark, F. C., et al. (2008). Cod and climate in a latitudinal cline: physiological analyses of climate effects in marine fishes. Clim. Res. 37, 253–270. doi: 10.3354/cr00766
Potts, W. A., Gotz, A., and James, N. C. (2015). Review of the projected impacts of climate change on coastal fishes in southern Africa. Rev. Fish Biol. Fish. 25, 603–630. doi: 10.1007/s11160-015-9399-5
Potts, W. M., Booth, A. J., Richardson, T. J., and Sauer, W. H. H. (2014). Ocean warming affects the distribution and abundance of resident fishes by changing their reproductive scope. Rev. Fish Biol. Fish. 24, 493–504. doi: 10.1007/s11160-013-9329-3
Reynolds, R. W., Smith, T. M., Liu, C., Chelton, D. B., Casey, K. S., and Schlax, M. G. (2007). Daily high-resolution-blended analyses for sea surface temperature. J. Clim. 20, 5473–5496. doi: 10.1175/2007JCLI1824.1
Robalo, J. I., Francisco, S. M., Vendrell, C., Lima, C. S., Pereira, A., Brunner, B. P., et al. (2020). Against all odds: a tale of marine range expansion with maintenance of extremely high genetic diversity. Sci. Rep. 10:12707. doi: 10.1038/s41598-020-69374-4
Roberts, M. J. (2005). Chokka squid (Loligo vulgaris reynaudii) abundance linked to changes in South Africa’s Agulhas Bank ecosystem during spawning and the early life cycle. ICES J. Mar. Sci. 62, 33–55. doi: 10.1016/j.icesjms.2004.10.002
Sandblom, E., Clark, T. D., Grans, A., Ekström, A., Brijs, J., Sundstrom, L. F., et al. (2016). Physiological constraints to climate warming in fish follow principles of plastic floors and concrete ceilings. Nat. Commun. 7:11447. doi: 10.1038/ncomms11447
Schlegel, R., Oliver, E. J., Wernberg, T., and Smit, A. (2017). Coastal and offshore co-occurrences of marine heatwaves and cold-spells. Prog. Oceanogr. 151, 189–205. doi: 10.1016/j.pocean.2017.01.004
Sinclair, B. J., Marshall, K. E., Sewell, M. A., Levesque, D. L., Willett, C. S., Slotsbo, S., et al. (2016). Can we predict ectotherm responses to climate change using thermal performance curves and body temperatures? Ecol. Lett. 19, 1372–1385. doi: 10.1111/ele.12686
Skeeles, M. R., Winkler, A. C., Duncan, M. I., James, N. C., van der Walt, K., and Potts, W. M. (2020). The use of internal heart rate loggers in determining cardiac breakpoints of fish. J. Therm. Biol. 89:102524. doi: 10.1016/j.jtherbio.2020.102524
Smale, D. A., Wernberg, T., Oliver, E. C. J., Thomsen, M., Harvey, B. P., Straub, S. C., et al. (2019). Marine heatwaves threaten global biodiversity and the provision of ecosystem services. Nat. Clim. Chang. 9, 306–312. doi: 10.1038/s41558-019-0412-1
Smit, A. J., Roberts, M., Anderson, R. J., Dufois, F., and Dudley, S. F. J. (2013). A coastal seawater temperature dataset for biogeographical studies: large biases between in situ and remotely-sensed data sets around the coast of South Africa. PLoS One 8:e81944. doi: 10.1371/journal.pone.0081944
Steinhausen, M. F., Sandblom, E., Eliason, E. J., Verhille, C., and Farrell, A. P. (2008). The effect of acute temperature increases on the cardiorespiratory performance of resting and swimming sockeye salmon (Oncorhynchus nerka). J. Exp. Biol. 211, 3915–3926. doi: 10.1242/jeb.019281
van der Walt, K., Porri, F., Potts, W., Duncan, M., and James, N. (2021). Thermal tolerance, safety margins and vulnerability of coastal species: projected impact of climate change induced cold water variability in a temperate African region. Mar. Environ. Res. 169:105346. doi: 10.1016/j/marenvres.2021.105346
Wernberg, T., Russell, B. D., Moore, P. J., Ling, S. D., Smale, D. A., Campbell, A., et al. (2011). Impacts of climate change in a global hotspot for temperate marine biodiversity and ocean warming. J. Exp. Mar. Biol. Ecol. 400, 7–16. doi: 10.1016/j.jembe.2011.02.021
Keywords: ocean warming, marine heatwaves, maximum heart rate, acute warming event, Sparidae, thermal physiology
Citation: van der Walt K-A, Potts WM, Porri F, Winkler AC, Duncan MI, Skeeles MR and James NC (2021) Marine Heatwaves Exceed Cardiac Thermal Limits of Adult Sparid Fish (Diplodus capensis, Smith 1884). Front. Mar. Sci. 8:702463. doi: 10.3389/fmars.2021.702463
Received: 29 April 2021; Accepted: 03 June 2021;
Published: 29 June 2021.
Edited by: Mansour Torfi Mozanzadeh, South Iran Aquaculture Research Center, Iran
Reviewed by: Sajjad Pourmozaffar, Iranian Fisheries Research Organization, Iran
Simon Morley, British Antarctic Survey (BAS), United Kingdom
Shaun H. P. Deyzel, Elwandle Coastal Node, South African Environmental Observation Network, South Africa
Copyright © 2021 van der Walt, Potts, Porri, Winkler, Duncan, Skeeles and James. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Kerry-Ann van der Walt, [email protected]
There are two main types of diabetes out there, each equally dangerous and fully capable of causing a lot of damage. If you want to be able to effectively manage your diabetes, you need to have the right information. Take a look at the following tips and see how easy it is to live a healthy life with diabetes.
Exercise is a key lifestyle habit for a diabetic person. You need to get moving as much as possible to help keep your weight at a reasonable level and your organs in tip-top shape. Try to go for a long walk after dinner or take the stairs at work instead of the elevator.
Make sure to take your diabetes medications exactly as directed. You are NOT a doctor, nor is anyone else giving you advice other than your physician. They tell you how often to take your prescriptions and how much you should take at a time because they know, so follow their directions.
If you’re looking for a fitness class to help you lose weight to deal with your Diabetes, try the hospital! They often offer aerobics classes for people with various issues, like seniors or the morbidly obese, or regular fitness classes through outreach programs. Ask your doctor to find out if these are available to you or apply directly.
Anyone with diabetes must exercise to stay in good health. Exercise helps strengthen the cardiovascular system and helps to increase the circulation to the arms and legs. It also helps to control blood sugar levels. The best forms of exercise for someone with diabetes are jogging, swimming, walking, and rowing.
Find a free clinic in your area to have your Diabetes monitored if you can’t afford to visit your doctor every three months. You can call your local Diabetes association, ask at a local hospital, or inquire through your Health Department, to find out where the closest clinic is to you.
If you’re working to lose weight and keep your Diabetes in check but can’t find any healthy breakfast options with protein that you enjoy, try a smoothie. You can buy protein powder at a health food store (make sure to ask if it has any sugar or artificial sweeteners) and you can put a scoop in to up the nutritional punch!
If you want to eat healthier to help overcome your Diabetes, but you just can’t stomach fish without some pops of flavor on it, try capers! They’re like olives in their flavor, but smaller and zestier. You can sprinkle them on any type of fish, I like to also add some slices of Spanish onion, and they take the place of sauce.
Want a tasty treat that won’t be forbidden by your doctor due to your Diabetes? Try nachos! Use a low fat cheese, low fat sour cream, homemade guacamole, and salsa, and you’ll be getting a ton of nutrition with a burst of flavor. If you add some beans to the salsa you’ll have an even healthier snack!
One of the goals in managing diabetes is to be able to live the lifestyle that you want. The more you can do to lead a normal lifestyle, the better your odds are of avoiding the dangerous side effects of having this potentially debilitating disease.
Gaucher disease (Gaucher's disease)
Gaucher, also referred to as Gaucher's, is an autosomal recessive disease and the most prevalent Lysosomal Storage Disorder (LSD), present in approximately 1 in 20,000 live births. Gaucher disease, also known as glucocerebrosidase deficiency, occurs when a certain lipid, glucosylceramide, accumulates in the bone marrow, lungs, spleen, liver and sometimes the brain. Although the disease presents as a spectrum of phenotypes with varying degrees of severity, it has been sub-divided into three subtypes according to the presence or absence of neurological involvement.
Gaucher type 1, the most common form of the disease, may present with chronic fatigue, easy bruising and bleeding, bone involvement due to bone infarctions or pathological fractures due to osteopenia. The neuronopathic forms of Gaucher are types 2 and 3 and are the rarest.
Although Gaucher is pan-ethnic, Gaucher disease type 1 is the most common inherited Jewish genetic disease, affecting Ashkenazi Jewish people (of Eastern, Central and Northern European ancestry). Within this population, approximately 1 in 450 have Gaucher and 1 in 10 are carriers.
Treatments and drug choices for Gaucher disease types 1 and 3 may vary depending on the severity of each patient's disease and the course of treatment your physician determines.
Learn more about Gaucher disease treatments and symptoms by clicking below.
The National Gaucher Foundation, Inc
We are the only independent, non-profit organization of its kind serving the Gaucher community in the US. Founded in 1984, the NGF has funded millions of dollars in research toward the cause, treatments and cure for Gaucher disease. We are an objective, independent voice of the Gaucher community, providing leadership, outreach and innovative thinking. The number of families and individuals affected by Gaucher is ever increasing, requiring extensive programs and services.
The NGF provides help by granting financial assistance and supporting legislation for Gaucher and other rare diseases. We host meetings, conferences and outreach events and supply marketing programs designed to promote awareness of Gaucher disease. Additionally, we provide a Mentor program, educational videos, brochures, exercise tips, and many other services and resources.
For more information, contact [email protected]
or call 877-649-2742.
Changes in law aim to protect kids’ internet privacy
Data known as “persistent identifiers,” which allow a child to be tracked over time and across websites, no longer can be collected without a parent’s permission, under the new rules.
Aiming to prevent companies from exploiting online information about children under 13, the Obama administration on Dec. 19 imposed sweeping changes in regulations designed to protect a young generation with easy access to the web.
But some critics of the changes worry they could stifle innovation in the market for educational apps.
Two years in the making, the amended rules to the decade-old Children’s Online Privacy Protection Act go into effect in July. Internet privacy advocates said the changes were long overdue in an era of cell phones, tablets, social networking services, and online stores with cell-phone apps aimed at kids for as little as 99 cents.
Siphoning details of children’s personal lives—their physical location, contact information, names of friends, and more—from their internet activities can be highly valuable to advertisers, marketers, and data brokers.
The Obama administration has largely refrained from issuing regulations that might stifle growth in the technology industry, one of the U.S. economy’s brightest spots.
Yet the Federal Trade Commission pressed ahead with the new kids’ internet privacy guidelines, despite loud complaints—particularly from small businesses and developers of educational apps—that the revisions would be too costly to comply with and would cause responsible companies to abandon the children’s app marketplace.
As evidence of internet privacy risks, the FTC last week said it was investigating an unspecified number of software developers that might have gathered information illegally without the consent of parents.
cover photo from Shoot for Science by Deepak Kakara, Dinesh Yadav, Sukanya Olkar, and Parijat Si
By Sebastian Sturm – Here in the NICE lab, we work on Rhagoletis pomonella – the apple maggot. The apple maggot is the larva of a fly which is specialized to a particular food source. Surprise: It's the hawthorn berry. To us, hawthorn berries and apples appear very different. Not so to the apple maggot fly – it likes to lay its eggs in both. However, they first experienced apples once European settlers introduced the fruits to North America. Before that they were only hawthorn berry maggots. Rhagoletis is a member of the family Tephritidae – the true fruit flies – not to be confused with Drosophila, which belongs to a different fly family. The Tephritids are a family of more than 4,000 species that cause enormous damage in agriculture.
Insects are the most diverse group of animals – not only regarding their number of species but also regarding their various life styles. Insects can be found in the air, in water and on land – ranging from deserts to glaciers. Just as diverse as their habitats are their food sources: liquids and solids of plant or animal origin. Insect species are often highly specialized to a particular source. Hence it is not surprising that their digestive system exhibits one or another interesting structure and adaptation – like the crop.
The first time I was dissecting the nervous system of an adult fly of Rhagoletis pomonella I was surprised by the staggering size of its crop – which literally filled the entire body of the fly. Because it was extensively contracting it provided an extraordinary view. I called my lab mates to have a look. Disgust mixed with curiosity: “What the heck is that?”
The crop of Rhagoletis pomonella is a large, two-lobed, thin-walled and transparent balloon. It is located between the oesophagus (the pipe for the initial food transport) and the proventriculus, a muscular ring which functions as a valve between foregut and midgut. In flies like Rhagoletis and Drosophila, the crop is connected to the oesophagus via a junction. The fly ingests the liquid food along with digestive enzymes secreted from glands. This solution is stored in the crop, which is contractile and thoroughly mixes its content and passes it to the proventriculus. The proventriculus regulates further transport into the midgut. Therefore, the general function of the crop and proventriculus corresponds to that of our stomach.
Some insect groups utilize their crop as a defensive mechanism. They can bring up previously swallowed food to discourage and disgust predators. Locusts and cockroaches, for instance, throw up a very dark, unappetizing goop when they feel threatened and captured. I feel that cockroaches themselves are already unsavoury enough, but I guess they'd rather be safe than sorry.
| Mineral    | Formula | Copper content |
|------------|---------|----------------|
| Cuprite    | Cu2O    | See question 2 |
| Chalcocite | Cu2S    | See question 2 |
1. Place these products in the extraction process in ascending order of copper content.
- Cathode copper
- Anode copper
- Copper concentrate
- Fire refined copper
The correct order is:
- Ore (0.25 to 0.5% copper)
- Concentrate (30% copper)
- Matte (50-70% copper)
- Blister (98% copper)
- Fire refined (99% copper)
- Anode (99% copper)
- Cathode (99.99% copper)
2. Use a periodic table to find the atomic masses of the elements in cuprite and chalcocite. Then work out the percentage of copper in each mineral. Don’t forget to double the copper mass as it is Cu2.
The masses are:
Ar(Cu) = 64; Ar(O) = 16; Ar(S) = 32
The molar mass of Cu2O = 128 + 16 = 144
The percentage of copper in Cu2O is therefore 128/144 x 100 = 89%
The molar mass of Cu2S = 128 + 32 = 160
The percentage of copper in Cu2S is therefore 128/160 x 100 = 80%
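The same arithmetic can be sketched in Python. This is an illustrative sketch only — the `percent_copper` helper is an assumption, not part of the original resource, and it uses the rounded atomic masses given in the question:

```python
# Mass percent of copper in a mineral, using the rounded atomic masses
# from the question: Ar(Cu) = 64, Ar(O) = 16, Ar(S) = 32.
AR = {"Cu": 64, "O": 16, "S": 32}

def percent_copper(formula):
    """formula maps element symbol -> atom count, e.g. {"Cu": 2, "O": 1} for Cu2O."""
    molar_mass = sum(AR[element] * count for element, count in formula.items())
    copper_mass = AR["Cu"] * formula["Cu"]  # the atom count doubles the copper mass for Cu2
    return 100 * copper_mass / molar_mass

cuprite = {"Cu": 2, "O": 1}     # Cu2O
chalcocite = {"Cu": 2, "S": 1}  # Cu2S

print(round(percent_copper(cuprite)))     # 89
print(round(percent_copper(chalcocite)))  # 80
```

Using the more precise atomic mass of copper (about 63.55) shifts the answers only slightly, which is why the rounded values are fine for this exercise.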
3. Which of the minerals in the table are sulfides?
Chalcocite, bornite and chalcopyrite.
4. Why is sulfur dioxide scrubbed from the smelter flue gases?
The sulfur dioxide produces sulfuric acid in the acid plant. It would cause acid rain if released into the atmosphere, as well as a serious health risk to anyone living anywhere near the furnaces. The sulfuric acid has a high value because it is used to leach copper oxide ores.
Some women develop hypothyroidism during or after pregnancy (postpartum hypothyroidism), often because they produce antibodies to their own thyroid gland. Left untreated, hypothyroidism increases the risk of miscarriage, premature delivery and preeclampsia — a condition that causes a significant rise in a woman's blood pressure during the last three months of pregnancy.
Thyroid problems in a pregnant woman can affect the developing baby. During the first three months of pregnancy, the baby receives all of its thyroid hormone from its mother. If the mother has hypothyroidism, the baby does not get enough thyroid hormone. This can lead to problems with mental development.
September is National Childhood Obesity Awareness Month, which is an extremely important topic of discussion. More than 23 million children and teenagers in the U.S. are obese, and while obesity rates soar among all age groups, it is a particularly grave concern for children.
Childhood obesity puts 1/3 of America’s children at risk for several serious health conditions:
- Increased risks of high cholesterol, high blood pressure and heart disease.
- Increased risks of insulin resistance, type 2 diabetes and impaired glucose tolerance.
- Increased risk of breathing difficulties, including asthma and sleep apnea.
- Increased risks of gastroesophageal reflux disease (GERD), liver disease and gallstones.
The good news is that childhood obesity is completely preventable. We can all be a part of the solution by taking simple steps to encourage our children to lead the healthiest lives possible.
In honor of National Childhood Obesity Awareness Month, we’re encouraging your family to make healthy changes together!
Get inspired to kick your family’s health in gear with Intermountain LiVe Well! We’ve got all kinds of ideas on fostering healthy habits and eating well along with a whole collection of Healthy Hikes the whole family can enjoy.
How do you encourage your kids to lead healthy lifestyles? Let us know in the comments below!
One of the most important and powerful uses of geospatial tools is for disaster response and mitigation. This issue of Apogeo takes direct aim at this subject, from several different angles. As I write this, the incredible country of Nepal has endured not one, but two massive deadly earthquakes, on April 25 (7.8 magnitude) and again on May 12 (7.3 magnitude). In addition to the devastating loss of life, with over 8,000 killed, 19,000 injured, and countless homeless, centuries-old buildings were destroyed at UNESCO World Heritage sites in the Kathmandu Valley, including some at the Kathmandu Durbar Square, the Patan Durbar Square, the Bhaktapur Durbar Square, the Changu Narayan Temple and the Swayambhunath Stupa.
Like many people, I have traveled there. In 1998, I spent a few months in Nepal, trekking the Annapurna Sanctuary (see photos), which was enlightening for me in many ways. This area is known as one of the most spiritual in the world, and I did see that the people seemed more at peace than most, while most of them lived in extreme poverty. It was a fascinating juxtaposition that I have since studied, and find to be very important. My heart goes out to all who are affected.
One important note from On the Edge columnist Hans-Peter Plag is that these extreme events are more disastrous in areas like Nepal. He notes here, “The impact of earthquakes is amplified in regions with poor building standards, which often coincide with poverty and corruption. As a result, the deadliest earthquakes on record are mostly not the largest in magnitude.”
Appropriately, the focus has shifted from disaster response to disaster risk management and mitigation for geospatial companies and NGOs, according to speakers at the Secure World Foundation salon on Disaster Risk Management during the National Space Symposium in April 2015. UN-SPIDER’s Dr. Shirish Ravan shared that helping at-risk communities before disasters is the focus now, which was reflected also in Taner Kodanaz’ comments regarding the efforts of DigitalGlobe’s “Seeing a Better World” Program. Read about these and additional perspectives by NASA’s Dr. David Green and Airbus’ Joerg Herrmann, in the article here.
The amount of time and resources that the for-profit companies can invest in disaster response emergencies is limited, of course, even though lives are on the line. This ethical dilemma is discussed in our Executive Interview with David Hartshorn, who is Director General of the Global VSAT Forum. While he has been entrenched in the satellite communications industry, he sees a tie with the Earth observations community and has a vested interest in working together, because of the way that the companies respond to disasters. You will find this interview here.
Thanks for reading!
Myrna James Yoo, Publisher | <urn:uuid:169b7bae-5dc8-4786-9ce4-2efe594bca12> | CC-MAIN-2019-18 | http://apogeospatial.com/priorities-changing-for-disaster-response/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578613888.70/warc/CC-MAIN-20190423214818-20190423235841-00053.warc.gz | en | 0.960751 | 608 | 2.578125 | 3 |
Submitted to: International Journal of Systematic and Evolutionary Microbiology
Publication Type: Peer Reviewed Journal
Publication Acceptance Date: June 10, 2011
Publication Date: April 1, 2012
Citation: Davis, R.E., Zhao, Y., Dally, E.L., Jomantiene, R., Lee, I., Wei, W., Kitajima, E. 2012. 'Candidatus Phytoplasma sudamericanum' a novel taxon from diseased passion fruit (Passiflora edulis f. flavicarpa Deg.). International Journal of Systematic and Evolutionary Microbiology. 62:984-989. Interpretive Summary: Phytoplasmas are very small bacteria that are responsible for hundreds of diseases affecting agriculturally important plants around the world. There is need to improve methods and tools for detecting and identifying phytoplasmas in order to aid efforts to curb the spread of diseases caused by phytoplasmas and to prevent the introduction of foreign phytoplasmas into U.S. agriculture. The present work was initiated to expand knowledge concerning phytoplasmas that infect edible fruit-bearing plants. The work focused on disease of passion fruit in Brazil, where passion fruit is widely grown commercially, is valued as a nutritious source of vitamins, and is used in producing fruit juices. Using DNA-based molecular methods for detection and identification, we discovered that diseased plants of passion fruit were infected by two different phytoplasmas. We found that one is a previously unknown phytoplasma that is related to the phytoplasma that causes X-disease of stone fruit trees, such as peach trees, in the U.S. and Europe. We found that the second represents a previously unknown phytoplasma species. We report DNA markers for both phytoplasmas, and we describe molecular features of the new phytoplasma species, for which we propose the name ‘Candidatus Phytoplasma sudamericanum’. The results of our study provide new information about, and provide new molecular markers for, detection and identification of the two previously unknown phytoplasmas. 
This advance is significant in part because the complete plant host ranges of the two phytoplasmas are not yet known, and because the phytoplasmas might have potential to invade passion fruit and other agricultural crops in the U.S. This work will interest scientists and students studying plant diseases, diagnostics companies and centers involved in pathogen detection, companies producing disease-free fruit trees, fruit growers and juice producers, and government agencies that implement plant quarantine regulations to prevent the introduction of foreign pests and diseases into U.S. agriculture.
Technical Abstract: Symptoms of abnormal proliferation of shoots resulting in formation of witches’ broom growths were observed in diseased plants of passion fruit (Passiflora edulis f. flavicarpa Deg.) in Brazil. RFLP analysis of 16S rRNA gene sequences amplified in polymerase chain reactions containing template DNAs extracted from diseased plants collected in Bonita, PE, and Vicosa, MG, Brazil, indicated that such symptoms were associated with infections by two mutually distinct phytoplasmas. One phytoplasma, PassWB-Br4 from Bonita, represents a new subgroup, 16SrIII-U, in the X-disease phytoplasma group (‘Candidatus Phytoplasma pruni’-related strains). The second phytoplasma, PassWB-Br3 from Vicosa, represents a previously undescribed subgroup in group 16SrVI. Phylogenetic analyses of 16S rRNA gene sequences were consistent with the hypothesis that strain PassWB-Br3 is distinct from previously described ‘Ca. Phytoplasma’ species. Nucleotide sequence alignments revealed that strain PassWB-Br3 shared less than 97.5 % similarity of 16S rDNA with previously described ‘Candidatus Phytoplasma’ species. The unique properties of DNA, in addition to natural host and geographical occurrence, support the recognition of strain PassWB-Br3 as a representative of a novel taxon, ‘Candidatus Phytoplasma sudamericanum’. | <urn:uuid:c7bdc085-84cc-4f57-8105-ecf3360b51d3> | CC-MAIN-2015-48 | http://www.ars.usda.gov/research/publications/publications.htm?SEQ_NO_115=265368 | s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398450715.63/warc/CC-MAIN-20151124205410-00211-ip-10-71-132-137.ec2.internal.warc.gz | en | 0.924107 | 913 | 2.546875 | 3 |
The learning aims for Loyola's literacy/reading specialist program are as follows:
- Candidates have knowledge of the foundations of reading and writing processes and instruction.
- Candidates use a wide range of instructional practices, approaches, methods, and curriculum materials to support reading and writing instruction.
- Candidates use a variety of assessment tools and practices to plan and evaluate effective reading instruction.
- Candidates create a literate environment that fosters reading and writing by integrating foundational knowledge, use of instructional practices, approaches and methods, curriculum materials, and the appropriate uses of assessments.
- Candidates view professional development as a career-long effort and responsibility. | <urn:uuid:f4a86971-ee95-4e3c-bbf5-58c50582b5d8> | CC-MAIN-2017-26 | http://www.loyola.edu/school-education/academics/graduate/literacy-reading/learning-aims | s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320215.92/warc/CC-MAIN-20170624031945-20170624051945-00290.warc.gz | en | 0.913472 | 135 | 2.84375 | 3 |
How do we have any idea what is going on in a horse's brain? Of course we can not ask a horse how they feel, or if they remember the task we asked of them the day before, but we can use technical tools to measure and use our observation through:
Science - is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions.
Natural science - a branch of science concerned with the description, prediction, and understanding of natural phenomena, based on empirical evidence from observation and experimentation. Mechanisms such as peer review and repeatability of findings are used to try to ensure the validity of scientific advances.
Biological Markers - through chemicals tested in blood ,feces, and urine.
Biology is concerned with the characteristics, classification and behaviors of organisms, as well as how species were formed and their interactions with each other and the environment.
Understanding the similarities and differences of the mammalian brain offers clearer insight to the form and function of the brain and central nervous system of each species. | <urn:uuid:c78cda98-e2b5-4467-b1d5-9e0acb28f359> | CC-MAIN-2019-51 | http://equine-neuroethology.com/brain-central-nervous-system/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540490743.16/warc/CC-MAIN-20191206173152-20191206201152-00171.warc.gz | en | 0.951603 | 212 | 3.59375 | 4 |
The effect of habitat complexity on the functional response of a seed-eating passerine
Baker, David J.; Stillman, Richard A.; Bullock, James M.. 2009 The effect of habitat complexity on the functional response of a seed-eating passerine. Ibis, 151 (3). 547-558. 10.1111/j.1474-919X.2009.00941.xFull text not available from this repository.
Recent population declines of seed-eating farmland birds have been associated with reduced overwinter survival due to reductions in food supply. An important component of predicting how food shortages will affect animal populations is to measure the functional response, i.e. the relationship between food density and feeding rate, over the range of environmental conditions experienced by foraging animals. Crop stubble fields are an important foraging habitat for many species of seed-eating farmland bird. However, some important questions remain regarding farmland bird foraging behaviour in this habitat, and in particular the effect of stubble on farmland bird functional responses is unknown. We measured the functional responses of a seed-eating passerine, the Chaffinch Fringilla coelebs, consuming seeds placed on the substrate surface in three different treatments: bare soil, low density stubble and high density stubble. Stubble presence significantly reduced feeding rates, but there was no significant difference between the two stubble treatments. Stubble reduced feeding rates by reducing the maximum attack distance, i.e. the distance over which an individual food item is targeted and consumed. The searching speed, handling time per seed, proportion of time spent vigilant, duration of vigilance bouts and duration of head-down search periods were unaffected by the presence of stubble. The frequency of vigilance bouts was higher in the bare soil treatment, but this is likely to be a consequence of the increased feeding rate. We show the influence of a key habitat type on the functional response of a seed-eating passerine, and discuss the consequences of this for farmland bird conservation.
|Item Type:||Publication - Article|
|Digital Object Identifier (DOI):||10.1111/j.1474-919X.2009.00941.x|
|Programmes:||CEH Topics & Objectives 2009 - 2012 > Biodiversity > BD Topic 1 - Observations, Patterns, and Predictions for Biodiversity|
|Additional Keywords:||agriculture, Chaffinch, foraging behaviour, Fringilla coelebs, stubble|
|NORA Subject Terms:||Biology and Microbiology
Ecology and Environment
|Date made live:||13 Oct 2009 14:22|
Actions (login required) | <urn:uuid:27ad4be1-8ed0-4686-9da8-3aadd60161b6> | CC-MAIN-2015-35 | http://nora.nerc.ac.uk/8265/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065318.20/warc/CC-MAIN-20150827025425-00003-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.875835 | 553 | 2.703125 | 3 |
The Earth at night is a blaze of lights. Cities, towns, natural gas fields, and even fishing fleets send their lights into the sky and off into space, announcing our presence to the universe.
This is how the continents of North and South America appear from space: lights from coast to coast. The large cities appear as sprawling jewels in the night.
(Visit http://www.ngdc.noaa.gov/dmsp/download_iss_movies.html to see a movie showing many cities from space.)
While we may gain from lighting the night, we also lose some things: particularly the night sky. The brighter the lights on the ground around us, the less we see of the stars and the more we lose connection with the universe around us. Consider this computer simulation of the loss of visibility of the night sky as city lights grow brighter around us.
Astronomers are particularly affected by increasing lights. Most of the objects astronomers are interested in observing with GSMT are very faint, such as galaxies at the edge of the universe, planets orbiting other stars, or stars just beginning to form. Thus an important requirement for a possible observatory site is a dark sky. While other factors contribute to a dark sky, the main one is the absence of human-made light sources: building lights, factories, airplanes and automobiles. Thus dark skies are found in isolated places far from any city or town. Unfortunately, as the population of the world grows, it gets harder and harder to find dark-sky locations. And sites that were dark in the past are getting brighter as nearby cities get larger.
As an example, when Kitt Peak National Observatory in southern Arizona was started in 1958, nearby Tucson had a population of only about 100,000 people. As seen from the top of Kitt Peak (see figure below), Tucson’s lights were fairly dim and far away. But the lights became more prominent as Tucson grew to nearly a million people today, despite one of the world’s most progressive light-reduction city ordinances.
The lights surrounding Kitt Peak today are most clearly seen from space. The night-time satellite image below, centered on Kitt Peak, shows the lights of Tucson to the upper left, and the lights from the southern edge of the Phoenix metropolitan area at top center. Other towns can be identified from the matching map in the lower panel.
The lights that can be directly seen from the top of a mountain depends on its elevation – the higher the mountain, the more distant the lights are than can be seen. Kitt Peak is about 7100 feet in elevation, so lights about 100 miles away can be seen, which is about the width of the image north and south from Kitt Peak. The relationship between elevation and the distance to the horizon is shown in the graph below.
Using this figure and maps of night lights from the Defense Meteorological Satellite Program, one can estimate the amount of light visible at a proposed observatory site from its elevation. A quantitative measure of the light can be obtained in the following activity.
For more information about dark skies and their preservation, visit the International Dark-Sky Association. | <urn:uuid:82149281-0ec3-4091-af9d-78eb310a330a> | CC-MAIN-2017-17 | https://www.noao.edu/education/gsmt/lp | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122726.55/warc/CC-MAIN-20170423031202-00613-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.948464 | 641 | 4.28125 | 4 |
What is Goal Setting?
Goal setting refers to the process of setting specific, attainable targets for individuals or groups.
Goal setting definition
Goal setting refers to the process of setting specific, attainable targets for individuals or groups. It is a motivational technique which can help the employees to understand the business goals, and motivate them to rise to the challenges.
Goal settings have to be specific, measurable, achievable, realistic and time bound.
Goal setting steps
1. Thinking about the target.
2. Breaking the target into smaller goals.
3. Using the SMART method to create attainable goals.
4. Create some short term positive goals.
5. Set prioroties
6. Keep track of progress.
7. Rewarding accomplishments
Goal setting S.M.A.R.T. technique
A good way to create constructive goals is to use the S.M.A.R.T. technique.
• Specific: clearly states what is to be achieved.
• Measurable: can be quantitatively determined or observed.
• Achievable: within reach given the role and responsibilities.
• Results-oriented: indicating what action is to be performed.
• Time-bound: including a deadline for completion.
Benefits of goal setting
• Defines priorities
• Establishs direction
• Identifies expected results
• Enhances teamwork
• Improves individual performance
• Clarifies expectations
• Connects individual contributions to the overall success of the university | <urn:uuid:13543d2f-4669-423b-9e07-c01e0861b427> | CC-MAIN-2021-17 | https://www.talentlyft.co/en/resources/what-is-goal-setting | s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039398307.76/warc/CC-MAIN-20210420122023-20210420152023-00285.warc.gz | en | 0.876043 | 312 | 3.6875 | 4 |
Using social and behavioural science to support COVID-19 pandemic response.
Nature human behaviour
The COVID-19 pandemic represents a massive global health crisis. Because the crisis requires large-scale behaviour change and places significant psychological burdens on individuals, insights from the social and behavioural sciences can be used to help align human behaviour with the recommendations of epidemiologists and public health experts. Here we discuss evidence from a selection of research topics relevant to pandemics, including work on navigating threats, social and cultural influences on behaviour, science communication, moral decision-making, leadership, and stress and coping. In each section, we note the nature and quality of prior research, including uncertainty and unsettled issues. We identify several insights for effective response to the COVID-19 pandemic and highlight important gaps researchers should move quickly to fill in the coming weeks and months.
View details for DOI 10.1038/s41562-020-0884-z
View details for PubMedID 32355299
What constitutes a 'successful' recovery? Patient perceptions of the recovery process after a traumatic injury.
Trauma surgery & acute care open
2020; 5 (1): e000427
Background: As the number of patients surviving traumatic injuries has grown, understanding the factors that shape the recovery process has become increasingly important. However, the psychosocial factors affecting recovery from trauma have received limited attention. We conducted an exploratory qualitative study to better understand how patients view recovery after traumatic injury.Methods: This qualitative, descriptive study was conducted at a Level One university trauma center. Participants 1-3years postinjury were purposefully sampled to include common blunt-force mechanisms of injuries and a range of ages, socioeconomic backgrounds and injury severities. Semi-structured interviews explored participants' perceptions of self and the recovery process after traumatic injury. Interviews were transcribed verbatim; the data were inductively coded and thematically analyzed.Results: We conducted 15 interviews, 13 of which were with male participants (87%); average hospital length of stay was 8.9 days and mean injury severity score was 18.3. An essential aspect of the patient experience centered around the recovery of both the body and the 'self', a composite of one's roles, values, identities and beliefs. The process of regaining a sound sense of self was essential to achieving favorable subjective outcomes. Participants expressed varying levels of engagement in their recovery process, with those on the high end of the engagement spectrum tending to speak more positively about their outcomes. Participants described their own subjective interpretations of their recovery as most important, which was primarily influenced by their engagement in the recovery process and ability to recover their sense of self.Discussion: Patients who are able to maintain or regain a cohesive sense of self after injury and who are highly engaged in the recovery process have more positive assessments of their outcomes. 
Our findings offer a novel framework for healthcare providers and researchers to use as they approach the issue of recovery after injury with patients.Level of evidence: III-descriptive, exploratory study.
View details for DOI 10.1136/tsaco-2019-000427
View details for PubMedID 32154383
- Implementation Challenges Using a Novel Method for Collecting Patient-Reported Outcomes After Injury JOURNAL OF SURGICAL RESEARCH 2019; 241: 277–84
Differences among Asian/Asian American, and Caucasian breast and gynecologic cancer patient reported survivorship needs, symptoms, and illness mindsets (N=220).
AMER SOC CLINICAL ONCOLOGY. 2019
View details for Web of Science ID 000487345804287
Targeting Mindsets, Not Just Tumors
Trends in Cancer
View details for DOI 10.1016/j.trecan.2019.08.001
Implementation Challenges Using a Novel Method for Collecting Patient-Reported Outcomes After Injury.
The Journal of surgical research
2019; 241: 277–84
Monitoring longitudinal patient-reported outcomes after injury is important for comprehensive trauma care. Current methodologies are resource-intensive and struggle to engage patients.Patients ≥18 y old admitted to the trauma service were prospectively enrolled. The following inclusion criteria were used: emergency operation, ICU length of stay ≥2 midnights, or hospital length of stay ≥4 d. Validated and customized questionnaires were administered using a novel internet-based survey platform. Three-month follow-up surveys were administered. Contextual field notes regarding barriers to enrollment/completion of surveys and challenges faced by participants were recorded.Forty-seven patients were eligible; 26 of 47 (55%) enrolled and 19 of 26 (73%) completed initial surveys. The final sample included 14 (74%) men and 5 (26%) women. Primary barriers to enrollment included technological constraints and declined participation. Contextual field notes revealed three major issues: competing hospital tasks, problems with technology, and poor engagement. The average survey completion time was 43 ± 27 min-21% found this too long. Seventy-four percent reported the system "easy to use" and 95% reported they would "very likely" or "definitely" respond to future surveys. However, 10 of 26 (38%) patients completed 3-mo follow-up.Despite a well-rated internet-based survey platform, study participation remained challenging. Lack of email access and technological issues decreased enrollment and the busy hospitalization posed barriers to completion. Despite a thoughtful operational design and implementation plan, the trauma population presented a challenging group to engage. Next steps will focus on optimizing engagement, broadening access to survey reminders, and enhancing integration into clinical workflows.
View details for PubMedID 31042606
Mindsets Matter: A New Framework for Harnessing the Placebo Effect in Modern Medicine.
International review of neurobiology
2018; 138: 137–60
The clinical utility of the placebo effect has long hinged on physicians deceptively administering an objective placebo treatment to their patients. However, the power of the placebo does not reside in the sham treatment itself; rather, it comes from the psychosocial forces that surround the patient and the treatment. To this end, we propose a new framework for understanding and leveraging the placebo effect in clinical care. In outlining this framework, we first present the placebo effect as a neurobiological effect that is evoked by psychological processes. Next, we argue that along with implicit learning and expectation formation, mindsets are a key psychological process involved in the placebo effect. Finally, we illustrate the critical role of the social environment and treatment context in shaping these psychological processes. In doing so, we offer a guide for how the placebo effect can be understood, harnessed, and leveraged in the practice of modern medicine.
View details for PubMedID 29681322
Side effects can enhance treatment response through expectancy effects: an experimental analgesic randomized controlled trial
2017; 158 (6): 1014–20
In randomized controlled trials, medication side effects may lead to beliefs that one is receiving the active intervention and enhance active treatment responses, thereby increasing drug-placebo differences. We tested these hypotheses with an experimental double-blind randomized controlled trial of a nonsteroidal anti-inflammatory drug with and without the addition of atropine to induce side effects. One hundred healthy volunteers were told they would be randomized to either combined analgesics that might produce dry mouth or inert placebos. In reality, they were randomized double blind, double-dummy to 1 of the 4 conditions: (1) 100 mg diclofenac + 1.2 mg atropine, (2) placebo + 1.2 mg atropine, (3) 100 mg diclofenac + placebo, or (4) placebo + placebo, and tested with heat-induced pain. Groups did not differ significantly in demographics, temperature producing moderate pain, state anxiety, or depression. Analgesia was observed in all groups; there was a significant interaction between diclofenac and atropine, without main effects. Diclofenac alone was not better than double-placebo. The addition of atropine increased pain relief more than 3-fold among participants given diclofenac (d = 0.77), but did not enhance the response to placebo (d = 0.09). A chain of mediation analysis demonstrated that the addition of atropine increased dry mouth symptoms, which increased beliefs that one had received the active medication, which, in turn, increased analgesia. In addition to this indirect effect of atropine on analgesia (via dry mouth and beliefs), analyses suggest that among those who received diclofenac, atropine directly increased analgesia. This possible synergistic effect between diclofenac and atropine might warrant future research.
View details for DOI 10.1097/j.pain.0000000000000870
View details for Web of Science ID 000402431700005
View details for PubMedID 28178072
View details for PubMedCentralID PMC5435545
Efficacy and Safety of Selective Serotonin Reuptake Inhibitors, Serotonin-Norepinephrine Reuptake Inhibitors, and Placebo for Common Psychiatric Disorders Among Children and Adolescents: A Systematic Review and Meta-analysis.
2017; 74 (10): 1011–20
Depressive disorders (DDs), anxiety disorders (ADs), obsessive-compulsive disorder (OCD), and posttraumatic stress disorder (PTSD) are common mental disorders in children and adolescents.To examine the relative efficacy and safety of selective serotonin reuptake inhibitors (SSRIs), serotonin-norepinephrine reuptake inhibitors (SNRIs), and placebo for the treatment of DD, AD, OCD, and PTSD in children and adolescents.PubMed, EMBASE, PsycINFO, Web of Science, and Cochrane Database from inception through August 7, 2016.Published and unpublished randomized clinical trials of SSRIs or SNRIs in youths with DD, AD, OCD, or PTSD were included. Trials using other antidepressants (eg, tricyclic antidepressants, monoamine oxidase inhibitors) were excluded.Effect sizes, calculated as standardized mean differences (Hedges g) and risk ratios (RRs) for adverse events, were assessed in a random-effects model.Primary outcomes, as defined by authors on preintervention and postintervention data, mean change data, and adverse event data, were extracted independently by multiple observers following PRISMA guidelines.Thirty-six trials were eligible, including 6778 participants (3484 [51.4%] female; mean [SD] age, 12.9 [5.1] years); 17 studies for DD, 10 for AD, 8 for OCD, and 1 for PTSD. Analysis showed that SSRIs and SNRIs were significantly more beneficial compared with placebo, yielding a small effect size (g = 0.32; 95% CI, 0.25-0.40; P < .001). Anxiety disorder (g = 0.56; 95% CI, 0.40-0.72; P < .001) showed significantly larger between-group effect sizes than DD (g = 0.20; 95% CI, 0.13-0.27; P < .001). This difference was driven primarily by the placebo response: patients with DD exhibited significantly larger placebo responses (g = 1.57; 95% CI, 1.36-1.78; P < .001) compared with those with AD (g = 1.03; 95% CI, 0.84-1.21; P < .001). The SSRIs produced a relatively large effect size for ADs (g = 0.71; 95% CI, 0.45-0.97; P < .001). 
Compared with participants receiving placebo, patients receiving an antidepressant reported significantly more treatment-emergent adverse events (RR, 1.07; 95% CI, 1.01-1.12; P = .01 or RR, 1.49; 95% CI, 1.22-1.82; P < .001, depending on the reporting method), severe adverse events (RR, 1.76; 95% CI, 1.34-2.32; P < .001), and study discontinuation due to adverse events (RR, 1.79; 95% CI, 1.38-2.32; P < .001).Compared with placebo, SSRIs and SNRIs are more beneficial than placebo in children and adolescents; however, the benefit is small and disorder specific, yielding a larger drug-placebo difference for AD than for other conditions. Response to placebo is large, especially in DD. Severe adverse events are significantly more common with SSRIs and SNRIs than placebo.
View details for PubMedID 28854296
View details for PubMedCentralID PMC5667359
- The Placebo Effect in the Clinical Setting: Considerations for the Pain Practitioner Principles and Practice of Pain Medicine McGraw-Hill Education Medical. 2015; 3: 162–169 | <urn:uuid:66582af9-f32b-4ce1-8849-cad2dcc4efcc> | CC-MAIN-2021-43 | https://profiles.stanford.edu/sean-zion | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.91/warc/CC-MAIN-20211020055136-20211020085136-00322.warc.gz | en | 0.921078 | 2,705 | 2.625 | 3 |
Following the success in the Urgenda Climate Case, citizens around the world are taking their governments to court over their insufficient climate policies. Urgenda also initiated the Climate Litigation Network to support climate cases worldwide.
In November 2015, the New Zealand law student Sarah Thomson took her government to court for its insufficient climate ambitions. The case was heard in court between 26 and 28 June 2017. On 2 November 2017, the High Court in Wellington issued its ruling. The Court held that climate change presents significant global risks and that the government is legally accountable for its actions to address climate change. The Court determined that the New Zealand Minister for Climate Change had acted unlawfully by failing to review the country’s climate change targets for 2050 after the publication of the most recent IPCC Assessment Report. The Court refrained from issuing an order against the government, as the newly elected government took up office in October 2017 and committed itself to a target of CO2-neutrality in 2050. Thomson is considering whether or not to continue her action, depending on the details of the new government’s policy. More information on the case is available here. You can download the statement of claim and the judgement of the court here and here.
On 22 March 2017, nine-year-old Ridhima Pandey filed a petition against the government of India in the National Green Tribunal. Pandey asserts that the Indian government has failed to fulfil its duties to her and the Indian people to mitigate climate change, as it falls short of meeting the emission reduction policies and standards it has set for itself.
In her petition, Pandey asks the Tribunal to order the government of India to prepare a carbon budget and national climate recovery plan in accordance with international agreements and scientific consensus. The petition filed by Pandey is available here.
On 26 May 2017, a group of senior Swiss women (the Klimaseniorinnen, Senior Women for Climate Protection) filed a legal complaint against the Swiss Government (the Federal Council) and three responsible authorities in the Federal Administrative Court. The complaint asserted that the Government’s climate policies are unlawful and violate constitutional and human rights because they fail to limit warming to the politically agreed ‘safe level’. The senior women demanded an immediate increase in the ambition of national mitigation targets for 2020 and 2030.
In November 2018, the Federal Administrative Court dismissed the case, ruling that the women are not particularly affected by the government’s climate change mitigation measures beyond the impact on the general public. The women appealed to the Swiss Supreme Court in January 2019. The court’s decision can be found in German and English here and here, and the appeal filing is here (German). More information about the case can be found here.
On 23 October 2017, Friends of the Irish Environment (FIE) launched a legal challenge against the Government’s failure to take the required action to avert dangerous climate change. FIE claims that the Irish National Mitigation Plan – one of the main planks in the Government’s climate change policy – does not do enough to reduce Ireland’s greenhouse gas emissions and is a violation of Ireland’s Climate Act, the Irish Constitution and human rights obligations. FIE also claims that the Plan falls short of the steps required by the Paris Agreement on climate change. The High Court of Ireland gave permission to proceed with the lawsuit, and the case was heard on 22 January 2019. More information on the case can be found on the Climate Case Ireland website, and news coverage of the hearing is available here. Supporters can sign on the website to tell the Irish Government “This case is also in my name!”
In 2014, 11 concerned Belgian citizens united to challenge the inadequate climate policies of the Belgian government, as well as several regional governments. On 1 June 2015, the association Klimaatzaak (Climate Case) filed its statement of claim with the court. Since the start of the case, more than 54,000 Belgians have joined the call for more ambitious climate policies and added their names as co-plaintiffs.
One of the regional governments addressed in the case (Flanders) challenged Klimaatzaak’s decision to file the case in French, effectively blocking the case from being heard on the merits. In early 2018, the Belgian Court of Cassation ruled that the case would proceed in French. A final judgment is expected in late 2020. More information on the case is available here.
In 2015, 21 young people filed a climate change claim against the U.S. government in the District Court of Oregon. In the case, also known as Youth v. Trump, the young Americans claim that for decades their government has actively contributed to causing climate change and that in doing so it has violated the youngest generation’s constitutional rights to life, liberty, and property, as well as failed to protect essential public trust resources.
In November 2016 the youth survived an attempt by the government and fossil fuel industry to have the case thrown out of court at an early stage. In a landmark opinion and order the federal district court of Oregon held that “the right to a climate system capable of sustaining human life is fundamental to a free and ordered society,” rejecting the government’s motions to dismiss the case. Since then, the Trump administration has made several applications to stay the trial, which the Ninth Circuit Court of Appeals and the Supreme Court have repeatedly denied. The government’s latest appeal of the 2016 decision is currently pending before the Ninth Circuit Court of Appeals, leading to a delay of the trial, which had been set for October 2018. In January 2019, the Ninth Circuit Court of Appeals granted the young people’s request to fast-track the government’s appeal. In February 2019, the youth applied for a court order to stop all new fossil fuel infrastructure development until the case is decided. More information on the case can be found here.
In October 2018, Greenpeace Germany, with three German families who are organic farmers, filed a lawsuit against the German government, claiming that the government’s failure to meet its 2020 greenhouse gas reduction target violates the families’ rights to life and health, property, and occupational freedom, as well as European law. They are asking the court to declare that the government is legally obligated to still comply with its 2020 target. The statement of claim (in German) can be found here, and a summary of pleas (in English) here. More information about the case can be found here.
In November 2018, the Quebec-based environmental nonprofit ENvironnement JEUnesse initiated the first stage of a class action lawsuit against the Canadian government on behalf of all citizens of Quebec under the age of 35. They argue that the government’s greenhouse gas reduction targets are inadequate, and that failing to take aggressive action to avoid catastrophic climate change violates the fundamental rights of young people under the Canadian and Quebec human rights charters. In the first stage of the proceedings, ENvironnement JEUnesse must convince the court that it has an arguable case. More information about the case can be found here. The application in French and English can be found here and here.
In December 2018, four nonprofits began the process of filing a climate change claim against the French government, by sending a letter of formal notice. In the letter, Fondation pour la Nature et l’Homme, Greenpeace France, Notre Affaire à Tous, and Oxfam France claim that the government has not done enough to effectively address climate change, and that this has violated a statutory duty to act on climate change. The French government has two months to formally respond to the letter, after which the nonprofits can officially file a case in the Administrative Court of Paris.
The letter of formal notice is here (in French; unofficial English translation here). Supporters of the case can sign a petition on the L’Affaire du Siecle website. More information about the case can be found here.
The climate charity Plan B and 11 members of the public aged 9 to 79 filed a climate change case against the UK Secretary of State for Business, Energy and Industrial Strategy in December 2017. The claimants argued that the UK’s 2050 climate target, set in 2008, was in line with neither the Paris Agreement nor new scientific evidence, and that the Secretary of State should be legally required to increase the target. In July 2018, the High Court decided not to hold a full hearing of the case, finding that Plan B’s arguments had no prospect of success. Plan B and the 11 members of the public appealed to the Court of Appeal, which rejected the appeal in January 2019, agreeing with the High Court. The order of the Court of Appeal can be found here. More information can be found here.
Twenty-five young people aged 7 to 25 brought a lawsuit against the Colombian government, several local governments, and a number of corporations. The young people claimed that climate change, along with the government’s failure to reduce deforestation and meet its 2020 zero-net Amazon deforestation target, threatened their fundamental rights to a healthy environment, life, health, food, and water. In April 2018, the Supreme Court ruled in favour of the young people, recognizing the Colombian Amazon as having its own rights, and ordered the government to make and carry out action plans to address deforestation in the Amazon. The Supreme Court decision (in Spanish) can be found here. More information about the case can be found here.
In May 2018, ten families, including children, filed a climate change case in the EU General Court against the EU Parliament and Council. The families, from Portugal, Germany, France, Italy, Romania, Kenya, Fiji, and the Swedish Sami Youth Association Sáminuorra, claim that the EU’s 2030 climate target is not enough to prevent dangerous climate change or to protect their fundamental rights to life, health, occupation and property. The applicants argue that the EU target of a 40% reduction in domestic GHG emissions below 1990 levels by 2030 is unlawful and that a greater level of ambition is required. The European Parliament and the Council have argued that the case is inadmissible. A decision by the Court on admissibility is expected. The application can be found here and more information on the case can be found here.
Asghar Leghari, a Pakistani farmer, brought a climate change case against the Pakistani government for failing to implement its national climate change law and policy. In 2015, the Green Bench of the Lahore High Court upheld the claim, invoking the right to life and the right to dignity. Finding that the government had done little to carry out its national climate law, the court directed government ministries to each nominate a focal point to ensure implementation and present a list of action points. The court also created a Climate Change Commission, mandated to monitor the government’s progress. The court decision can be found here.
On 30 March 2015 the Oslo Principles on Global Climate Change Obligations were launched, formulated by an international group of eminent jurists, including High Court judges, law professors and advocates from countries such as Brazil, China, India, the US and the Netherlands. The Oslo principles hold that regardless of the existence of international agreements, governments already have a legal obligation to avert the harmful effects of climate change, based on existing international human rights law, environmental law and tort law.
The Oslo group endorses the arguments that Urgenda brings forward in its climate case and also provides support to initiatives in other countries to involve the courts in their efforts to contain climate change.
On April 8, Dutch daily newspaper Trouw published an extensive interview with Jaap Spier, Advocate-General to the Dutch Supreme Court, concerning the Oslo Principles and the Urgenda climate case. According to Spier, ‘Courts can force countries to adopt effective climate policies. Court cases are perhaps the only way to break through the political apathy about climate change.’
From the article: Does a judge need to be an activist in order to make a statement about climate change? “No”, says Spier, “it is just a matter of applying existing law, although undoubtedly not all judges will be open to this. Judges with the courage to give a ruling on this will one day be applauded, whereas those who don’t will eventually be tarred and feathered.” | <urn:uuid:1076194a-ce54-4822-9d59-6240d2d9c717> | CC-MAIN-2019-13 | https://www.urgenda.nl/en/themas/climate-case/global-climate-litigation/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201882.11/warc/CC-MAIN-20190319012213-20190319034213-00061.warc.gz | en | 0.956125 | 2,531 | 2.59375 | 3 |
Carl Czerny: (b Vienna, 21 Feb 1791; d Vienna, 15 July 1857) Austrian piano teacher, composer, pianist, theorist and historian. As the pre-eminent pupil of Beethoven and the teacher of many important pupils, including Liszt, Czerny was a central figure in the transmission of Beethoven’s legacy. Many of his technical exercises remain an essential part of nearly every pianist’s training, but most of his compositions – in nearly every genre, sacred and secular, with opus numbers totalling 861, and an even greater number of works published without opus – are largely forgotten. A large number of theoretical works are of great importance for the insight they offer into contemporary musical genres and performance practice.
The primary source of information about Czerny is his autobiographical sketch entitled Erinnerungen aus meinem Leben (1842). In it, he describes his paternal grandfather as a good amateur violinist, employed as a city official in Nimburg (Nymburk), near Prague. Czerny’s father, Wenzel, a pianist, organist, oboist and singer, was born there in 1750, and received his education and a good musical training in a Benedictine monastery near Prague. After marriage, Wenzel settled in Vienna in 1786, where he earned a meagre existence as a music teacher and piano repairman. Czerny, an only child, was born in Vienna in the year of Mozart’s death. He and his parents resided together until his mother’s death in 1827, and his father’s in 1832. He never married, and lived alone for the remainder of his life.
Czerny describes his childhood as ‘under my parents’ constant supervision… carefully isolated from other children’. He began to study the piano with his father at an early age, and by ten was ‘able to play cleanly and fluently nearly everything of Mozart [and] Clementi’. His first efforts at composition began around the age of seven. In 1799, he began to study Beethoven’s compositions, coached by Wenzel Krumpholz, a violinist in the Court Opera orchestra, who introduced him to Beethoven when he was ten. Czerny played for him the opening movement of Mozart’s C major Piano Concerto, k503, the ‘Pathétique’ Sonata, and the accompaniment to Adelaide, which his father sang. Beethoven indicated that he wanted to teach Czerny several times a week, and told his father to procure C.P.E. Bach’s Versuch. Czerny describes the lessons as consisting of scales and technique at first, then progressing through the Versuch, with the stress on legato technique throughout. The lessons stopped around 1802, because Beethoven needed to concentrate for longer periods of time on composition, and because Czerny’s father was unable to sacrifice his own lessons in order to take his son to Beethoven. Czerny nevertheless remained on close terms with the composer, who asked him to proofread all his newly published works, and entrusted him with the piano reduction of the score of Fidelio in 1805.
In 1800, Czerny made his public début in the Vienna Augarten hall, performing Mozart’s C minor Concerto k491. He was renowned for his interpretation of Beethoven’s work, performing the First Concerto in C major in 1806, and the ‘Emperor’ in 1812. Beginning in 1816 he gave weekly programmes at his home devoted exclusively to Beethoven’s piano music, many of which were attended by the composer. Apparently he could perform all of Beethoven’s piano music from memory. Although his playing was praised by many critics (‘uncommonly fiery’, according to Schilling), he did not pursue a career as a performer. He made arrangements for a concert tour in 1805, for which Beethoven wrote a glowing testimonial, but although he describes himself at this time as quite proficient as a pianist, sight-reader and improviser, he concedes that ‘my playing lacked the type of brilliant, calculated charlatanry that is usually part of a travelling virtuoso’s essential equipment’. For these reasons, in addition to political instability and the modest income of his family, he chose to cancel the tour. He also apparently decided at this point never to undertake the life of a travelling virtuoso, a path that would have made him more widely known as a performer. Instead, he decided to concentrate on teaching and composition.
He spent a good deal of time with Clementi when the latter was in Vienna in 1810, becoming familiar with his method of teaching, which Czerny greatly admired and incorporated into his own pedagogy (his op.822 is entitled the Nouveau Gradus ad Parnassum). In his early teens Czerny began to teach some of his father’s students. By the age of 15, he was commanding a good price for his lessons, and had many pupils. In 1815, Beethoven asked him to teach his nephew, Carl. As his reputation continued to grow, he was able to command a lucrative fee, and for the next 21 years he claims to have given 12 lessons a day, 8 a.m. to 8 p.m., until he gave up teaching entirely in 1836. In 1821, the nine-year-old Liszt began a two-year period of study with Czerny. The teacher noted that ‘never before had I had so eager, talented, or industrious a student’, but lamented that Liszt had begun his performing career too early, without proper training in composition. Czerny also taught Döhler, Kullak, Alfred Jaëll, Thalberg, Heller, Ninette von Belleville-Oury and Blahetka.
Around 1802, Czerny began to copy out many J.S. Bach fugues, Scarlatti sonatas and other works by ‘ancient’ composers. He describes learning orchestration by copying the parts from the first two Beethoven symphonies, and several Haydn and Mozart symphonies as well. He published his first composition in 1806 at the age of 15: a set of 20 Variations concertantes for piano and violin op.1 on a theme by Krumpholz. Until he gave up teaching, composition occupied ‘every free moment I had’, usually the evenings. The popularity of his first ten opus numbers issued in 1818–19, and of his arrangements of works by other composers, made publishers eager to print anything he would submit, and he earned a substantial amount from his compositions.
The quantity and diversity of Czerny’s compositional output is staggering. He divided his works into four categories: 1) studies and exercises; 2) easy pieces for students; 3) brilliant pieces for concerts; and 4) serious music. As Kuerti (1995, p.7) notes, it is interesting and revealing that he did not regard the ‘brilliant pieces for concerts’ as ‘serious music’. The compositions for piano illustrate the explosion in the number of works published for the instrument at a critical time in its development. In addition to approximately 100 technical studies, Czerny published piano sonatas, sonatinas and hundreds of shorter works, many of which were arranged for piano, four to eight hands. He also published a plethora of works based on national anthems, folksongs, and other well-known songs. Works for other instruments and genres include much symphonic and chamber music, as well as sacred choral music. Mandyczewski’s tabulation of the works remaining in manuscript in the Vienna Gesellschaft der Musikfreunde includes over 300 sacred works. Czerny published approximately 300 arrangements without opus numbers. These works are based on themes from approximately 100 different operas and ballets, plus symphonies, overtures and oratorios by such composers as Auber, Beethoven, Bellini, Cherubini, Donizetti, Halévy, Handel, Haydn, Hérold, Mendelssohn, Mercadante, Meyerbeer, Mozart, Rossini, Spohr, Verdi, Wagner and Weber.
The predominant view of Czerny at the end of the 20th century – of the pedagogue churning out a seemingly endless stream of uninspired works – is that propagated by Robert Schumann in his reviews of many Czerny compositions in the Neue Zeitschrift für Musik (‘it would be hard to discover a greater bankruptcy in imagination than Czerny has proved’, review of The Four Seasons, 4 brillant fantasias op.434). However, Schumann’s rather cavalier dismissal of Czerny was not uniformly shared. During his sojourn in Vienna (1829), Chopin was a frequent visitor at Czerny’s home, and a good deal of correspondence between the two survives. One of Liszt’s letters from Paris to his teacher in Vienna (26 August 1830) describes his performances of Czerny’s Piano Sonata no.1 in A major op.7, and the work’s enthusiastic reception. He urged Czerny to join him in Paris. Liszt’s high regard is again seen in his inclusion of Czerny as one of the contributors to his Hexaméron, the Grand Variations on the March from Bellini’s I puritani, arranged by Liszt, and including variations by Chopin, Czerny, Herz, Liszt, Pixis and Thalberg, composed in 1837. Perhaps even more striking and challenging is Kriehuber’s famous portrait (1846), which depicts, assembled around Liszt at the piano (in addition to a self portrait of the painter), Berlioz, Czerny and the violinist Heinrich Ernst, who was regarded as one of the greatest virtuosos of the 19th century. All are lost in the Romantic reverie evoked by Liszt’s performance. Perhaps this symbolizes Beethoven’s spirit as transmitted by Czerny to Liszt, Berlioz and Ernst.
Czerny’s complete schools and treatises combine sound pedagogy with remarkable revelations about contemporary performing practices, and present a detailed picture of the musical culture of the day. He assigned prominent opus numbers to his four most ambitious instructional works. In the Fantasie-Schule, opp.200 and 300, he uses stylized models and what he terms a ‘systematic’ approach to improvising preludes, modulations, cadenzas, fermatas, fantasies, potpourris, variations, strict and fugal styles and capriccios. His Schule des Fugenspiels, op.400, comprising 12 pairs of preludes and fugues, is intended as a study in multi-voiced playing for pianists. His most substantial work, the Pianoforte-Schule, op.500, covers an extraordinary range of topics, including improvisation, transposition, score reading, concert decorum and piano maintenance. The fourth volume (added in 1846) includes advice on the performance of new works by Chopin, Liszt and other notable composers of the day, as well as on Bach and Handel, and Czerny also draws on his reminiscences of Beethoven’s playing and teaching. In his last major treatise, the Schule der praktischen Tonsetzkunst, op.600, he returns to the models of form and descriptions of style first expounded in his op.200, but here uses them for the instruction of composers.
Czerny’s works reveal, in addition to the familiar pedagogue and virtuoso, an artist of taste, passion, sensitivity, drama, lyricism and solitude. Douglas Townsend sees in the four-hand sonata in C minor op.10 (Sonata sentimentale) a fine example of the composers who straddled the classical tradition and early romanticism. Kuerti (1995, p.491) has described the Third Sonata in F minor op.57 as ‘outstandingly original’; because it is in the same key and carries the same opus as Beethoven’s ‘Appassionata’, Kuerti suggests that Czerny may have been challenging his former master to a duel in the work. Townsend describes the Concerto in C major for piano four hands and orchestra, op.153 as ‘an interesting example of the late classical piano concerto combined with the emerging bravura piano technique of the mid-nineteenth century’. Certain of the exercises stand as fine compositions in their own right, such as some of the character pieces found in the Left Hand Etudes, op.718, and the Art of Finger Dexterity, op.740.
Czerny’s will (published in Dwight’s Journal of Music, 15 August 1857) details the sizable fortune he had amassed from his published works and wealthy pupils. He left his considerable library to the Gesellschaft der Musikfreunde.
by STEPHAN LINDEMAN (with GEORGE BARTH)
from The New Grove Dictionary of Music and Musicians
Passed in 1973, the federal Endangered Species Act (ESA) was much needed. Before then, we had given little regard to the damage that we were doing to fish and wildlife through pollution, habitat destruction, and overharvest. The gray wolf, the shortnose sturgeon, the whooping crane, and the American crocodile are but a few of the species brought back from the brink.
But soon environmental activists discovered that they could use the act to impose preservationist agendas, under the guise of saving endangered species. They started suing the federal government to force action.
As a result, the ESA now has become a polarizing force, as examples abound of the federal government abusing its power to seize and/or deny use of privately owned lands and waters. Sadly, some property owners even practice “shoot, shovel, and shut up” as a means of protecting themselves.
And now the environmentalists, financed by Pew Charitable Trusts, want to use the same tactic to restrict fishing by imposing “ecosystem-based fisheries management.” It’s simply the ESA by another name, with the focus on our waters.
The Recreational Fishing Alliance reports this Pew strategy:
“Ecosystem-based fisheries management could ensure the long-term health of our fisheries and the communities that depend on them for recreation, employment, and nutrition,” with environmental advocates describing the vague term as a system to “account for the protection of important habitats, consider the critical role of prey, or forage fish, in the food web, and reduce the waste of non-target species through bycatch.”
And in response, Jim Donofrio, executive director of the Recreational Fishing Alliance, says this:
"Pew Charitable Trusts wants ecosystem protections put into the federal fisheries law. That way they've got a legal argument to sue and settle for increased fisheries restrictions.
"Under such a nebulous ecosystem definition, Pew and their partners would then have a legal challenge to close down any recreational fishery they choose by claiming the need to protect sea lice, spearing, oyster toads, undersea corals, even jellyfish."
In May, Pew will hold a forum for Connecticut anglers in what RFA calls the “Hijacking America” tour.
“The Pew script explains how ecosystem plans should be created and implemented across our coasts to further integrate ecosystem considerations into management, while appealing for support for incorporating ecosystem-based fishery management policies into federal law by way of changes to MSA (Magnuson-Stevens Act). Event organizers are hyping ecosystem-based management as yet another ‘new approach’ to fisheries management in their war on recreational fishing,” RFA says.
Go here to learn more about this and how Pew, according to RFA, is trying to recruit recreational anglers “willing only to speak positively about federal fisheries management policies that have denied anglers access to healthy, rebuilt stocks like summer flounder, black sea bass, and porgy.” | <urn:uuid:77f07aa4-aa41-4a63-a763-327443c3e06c> | CC-MAIN-2014-41 | http://www.activistangler.com/journal/tag/recreational-fishing-alliance | s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657124356.76/warc/CC-MAIN-20140914011204-00131-ip-10-196-40-205.us-west-1.compute.internal.warc.gz | en | 0.945928 | 631 | 2.90625 | 3 |
Recommended Operating Systems and Hardware
- Operating system: Windows XP, Vista, Win7, Win8, or Mac OS X (how to find which version of Windows you are running) (which Mac OS?)
- Audio: sound card and speakers or headphones for listening
- Internet connection: Broadband (1/2 MB) (you should be able to find this information on your Internet bill or do a speed test)
- Screen resolution: at least 800 x 600 (what is your screen resolution?)
- Internet browser: IE 7 or greater, Firefox 2 or greater, browser set to accept cookies and to show the newest version of a page (which browser are you using?)
- All of those items recommended in minimum specifications, PLUS the following:
- Internet connection: Cable modem, DSL or better (1 MB or higher recommended for high-quality video)
- Screen resolution: 1024 x 768
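The two resolution figures above (800 x 600 minimum, 1024 x 768 recommended) can be sketched as a small check. The helper below is purely illustrative and not part of the learning platform; it is written as a plain function so it can be run anywhere, but in a browser you would pass it `window.screen.width` and `window.screen.height`.

```javascript
// Illustrative helper using the thresholds stated above:
// 800 x 600 is the minimum, 1024 x 768 the recommended resolution.
function checkResolution(width, height) {
  const MINIMUM = { width: 800, height: 600 };
  const RECOMMENDED = { width: 1024, height: 768 };

  if (width >= RECOMMENDED.width && height >= RECOMMENDED.height) {
    return "recommended";
  }
  if (width >= MINIMUM.width && height >= MINIMUM.height) {
    return "minimum";
  }
  return "below minimum";
}

// In a browser you would call it with the real values, e.g.:
//   checkResolution(window.screen.width, window.screen.height);
console.log(checkResolution(1024, 768)); // "recommended"
console.log(checkResolution(800, 600));  // "minimum"
```

A 1920 x 1080 display comfortably exceeds both thresholds, while an old 640 x 480 display would report "below minimum".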
Most browsers are suitable for viewing the content in our online learning area. Internet Explorer 7 or above, Firefox 2.x or above, Google Chrome, or Safari for the Mac are all suitable, tried-and-tested browsers. You will need to ensure that various plugins are also up to date (see below). We recommend updating your browser to the latest version. If you are not sure whether your browser and plugins are up to date, you can visit the respective websites to check and download as appropriate.
Mozilla Firefox (recommended): http://www.mozilla.org/en-US/firefox/new/
Firefox plugin check: http://www.mozilla.org/en-US/plugincheck/
Google Chrome: http://www.google.com/chrome/intl/en-GB/landing_tv.html
If you are using Google Chrome it should update all its essential plugins and extensions automatically.
Microsoft Internet Explorer: http://windows.microsoft.com/en-GB/internet-explorer/products/ie/home
Flash Player: http://get.adobe.com/flashplayer/
Adobe Reader: http://www.adobe.com/uk/products/reader.html
For those using Apple Mac – your computer should update automatically but should you need to, you can download the latest version of Safari here: http://www.apple.com/safari/download/
- Make sure that the browser is set to accept cookies (from both 1st party and 3rd party).
- Make sure your popup blocker is either disabled or allows popups from the online learning area. | <urn:uuid:62621608-373a-41e0-8ab2-7c4e7f2cb37e> | CC-MAIN-2017-51 | http://www.scas.org.uk/training/technical-requirements-online-courses/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948592846.98/warc/CC-MAIN-20171217015850-20171217041850-00686.warc.gz | en | 0.80788 | 527 | 2.796875 | 3 |
Most fatal fires happen right where you think you are the safest — in your own home, according to Jennifer Mieth, public information officer for the Massachusetts Department of Fire Services.
Many break out at night when people are asleep, Mieth said, and “your sense of smell goes to sleep when you do.’’
As you prepare your home and yard for winter, waiting until the last minute to turn on the heat, there are things you can do to keep you and your family safe from fire. There were 14,173 residential-building fires last year, with 44 deaths, according to the Department of Fire Services. That represented 84 percent of all structure fires reported in the state and a 2 percent decrease from 2015. It was the lowest number reported since 2008, but we can do better.
“There are a lot of things people don’t know and a lot of misconceptions,’’ said Susan McKelvey, communications manager for the National Fire Protection Association. “There are so many things we can do to dramatically reduce the risk of fire — sometimes things people don’t think about.’’
We asked Mieth, McKelvey, and the state’s fire marshal, Peter J. Ostroskey, for fire-prevention tips:
Check your alarms
“It is important to have working smoke alarms,’’ Mieth said. “People that have battery-operated alarms [need to] replace the batteries twice per year to make sure they are working.’’
Smoke alarms last only 10 years, she added; after that the sensor deteriorates.
The National Fire Alarm Code requires a smoke alarm in every bedroom, on every level of the home, and in any area other where people sleep.
“There is a sense of overconfidence at home,” McKelvey said. “People don’t think fire will happen to them, so they don’t take it seriously.”
Don’t forget to have working carbon monoxide detectors on every floor, too.
Make an escape plan
“The key is to have two ways out’’ of each room, Ostroskey said, “so you have the best chance if one is impeded for some reason.’’
Today’s homes burn a lot faster because so much furniture is made of synthetic materials, Mieth said. Home fires double in size every minute, and they emit toxic gases, she said. (Watch a video shot by the Brockton Fire Department in 2003 that shows how quickly fire can spread. Click here.)
When making an escape plan, McKelvey said, ensure all exits are clutter-free, and designate a meeting place outside, in front of your home.
Once you are outside, do not go back in for anything, she said.
Be careful cooking
To prevent a fire, “stay in the kitchen when you are cooking,” Mieth said. If there is a fire, “put a lid on it, turn the heat off, and resist the temptation to move the pot.”
And never leave the house or go to a bed with a major appliance like a stove or dryer running or Christmas lights or space heaters on, she said.
Keep the electronics to a minimum
“As a general rule, one plug, one outlet,” Mieth said. “Heavy-duty appliances need to be plugged directly into the wall. Don’t plug an extension cord into a power strip.”
And check your cords. Make sure that they are not pinched and that nothing is sitting on top of them, Ostroskey said.
Need to charge your phone for work in the morning? “Never leave a lithium ion battery-powered appliance charging after it is fully charged, so it is best to break the habit of leaving cellphones charging overnight,’’ Mieth wrote in an e-mail. “I have a terrible photo of a fire at Framingham State of a laptop left on the bedclothes, starting a terrible fire, and [I] have seen photos of cellphones under teens’ pillows starting fires. . . . Battery-operated appliances like this generate heat. . . . charge your appliances on noncombustible surfaces.’’
Have a licensed electrician check your home’s wiring every 10 years. Small upgrades and making sure that grounds are secure usually don’t cost a lot, Mieth said. “As our electrical usage over time grows, it’s important to have your system keep up. . . . Just as you need a new roof every so often, one should plan to make upgrades to the electrical system.”
Keep it clean
“We are coming up on heating season, so make sure chimneys, woodstoves, and other fossil-fuel equipment is clean,’’ Ostroskey said. Also, get your gas heaters checked before turning those on.
Be sure to dispose of ashes in a metal container with a lid — not in cardboard boxes, recycling bins, trash barrels, plastic bags, or with other refuse, Mieth said.
Keep an eye on those portable heaters
Establish a 3-foot circle of safety around your space heater, free of anything that can burn, Mieth said, and be sure to turn it off before you go to sleep. Avoid using an extension cord; plug any heat-generating appliance directly into a wall outlet.
“Daisy-chaining’’ extension cords was a factor in both fatal space heater fires last year, she said. “Extension cords don’t have the safety of a circuit breaker tripping when overloaded.’’
“Use the proper appliance [to heat your house]; don’t use a stovetop or oven for heat,’’ Ostroskey said.
Store flammables away from the furnace
Keep a 3-foot safe zone around the furnace, free of anything that can burn, Mieth said, adding that paint and chemicals should be stored in a shed or a locked garage. Gasoline, however, should not be stored in an attached garage.
And what kind of fire extinguisher should you have on hand?
“I do not recommend fire extinguishers,’’ except as a way to help you escape, Mieth said. “Most people are not trained to use extinguishers, [and] most home extinguishers are not recharged periodically, so you don’t know if they’ll work.
“It is a contradictory message to get out and stay out,’’ she said. | <urn:uuid:27fcddfb-a7bb-4fae-8f48-fce02671d203> | CC-MAIN-2019-22 | http://realestate.boston.com/ask-the-expert/2017/10/12/your-life/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256184.17/warc/CC-MAIN-20190521002106-20190521024106-00073.warc.gz | en | 0.929211 | 1,436 | 2.640625 | 3 |
What does God think about divorce, incest, polygamy, and bestiality? Over the years, readers of the Bible have used Gen 2:24 as a lens through which to view these issues. More recently, Gen 2:24 has become a “go-to” text on the issue of homosexuality, despite the fact that the passage never mentions the word.
Scholarly views about the meaning of Gen 2:24 vary greatly. One thing that all agree on is that Gen 2:24 is an etiology, a text that explains something. Most scholars also agree that Gen 2:24 was added by an editor working later than the primary author of Gen 2, perhaps to address the needs of a later audience. What is it that Gen 2:24 explains? This is where scholarly opinion begins to diverge.
Some scholars think that Gen 2:24 functions as a normative definition of marriage—though it should also be noted that the word marriage appears nowhere in the text. According to this view, Gen 2:24 defines marriage as being between two people of different genders and from different families, for life, to the exclusion of all others. The thinking goes that by implication this definition excludes any other form of relationship, so that Gen 2:24 tells us that “marriage” between two men or two women, for example, falls outside God’s plans for humankind revealed in creation.
By this logic, one would then also have to agree that no married couple could ever live with the husband’s family (“Therefore a man leaves his father and his mother and clings to his wife…” Gen 2:24), yet we know that in ancient Israel, as in many other cultures, couples did continue to live patrilocally—that is, with the husband’s family, where his inheritance of land would have been.
Other scholars disagree, arguing either that Gen 2:24 has nothing to do with marriage at all or, if it is an etiology about marriage, that it does not intend to provide a normative definition of marriage. In other words, Gen 2:24 is a descriptive explanation (this is what does happen) rather than normative explanation (this is what must happen.)
For those scholars who don’t believe that Gen 2:24 is about marriage, Gen 2:24 explains quite a different phenomenon—the strength of the attraction between human beings. This attraction is the result of the way that God made men and women. Humanity was at first constituted in a single adam, or “earth creature,” but was separated into genders by God in order to solve the problem created by the adam being alone (Gen 2:18). This creative process leads to an attraction between human beings that is so strong that each one must “leave” (the Hebrew word is stronger, more like abandon or forsake) their parents and “cleave” to their mate, to become a new family unit. And remember that one of the primary responsibilities of Israelites was to honor their parents (Exod 20:12), not to abandon them!
The difference between these approaches to interpreting Gen 2:24 is striking. Though some interpret Gen 2:24 as a prescriptive verse, describing how marriage must be and how people must act, others interpret it as an acknowledgement that people do not always form relationships as their parents, or their religious values, would have them do. They may choose a partner of the “wrong’ gender” or ethnicity or religion. The drive to do this is the result of God’s actions in creation. | <urn:uuid:3431055e-76e2-49b3-aed8-368f94ab2bf8> | CC-MAIN-2017-43 | http://bibleodyssey.org/en/passages/related-articles/marriage-and-the-attraction-between-men-and-women-in-genesis-2-24 | s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824824.76/warc/CC-MAIN-20171021171712-20171021191712-00736.warc.gz | en | 0.979815 | 741 | 2.75 | 3 |
With the continuous development of current science and technology, the application of titanium alloy can not only improve the firmness of its overall construction materials in space technology and marine technology, but also improve the quality of some civil titanium CNC parts, Therefore, it is very important to analyze the characteristics of titanium alloy which is difficult to process and find out how to process in CNC machining.
Problems and Challenges in CNC Titanium Machining
When the hardness of titanium alloy is higher than HB350, it is very difficult to cut. When the hardness is lower than HB300, it is easy to stick and difficult to cut. However, the hardness of titanium alloy is only one aspect that is difficult to be machined. The key lies in the comprehensive influence of the chemical, physical and mechanical properties of titanium alloy on its machinability. Titanium alloy has the following cutting characteristics:
(1) Small deformation coefficient: This is a remarkable feature of titanium alloy cutting, the deformation coefficient is less than or close to 1. The distance of the chip sliding on the rake face is greatly increased, which accelerates the tool wear.
(2) High cutting temperature: because the thermal conductivity of titanium alloy is very small, the contact length between chip and rake face is very short, the heat generated during cutting is not easy to transfer out, and it is concentrated in a small range near the cutting area and cutting edge, so the cutting temperature is very high. Under the same cutting conditions, the cutting temperature can be more than twice as high as that of 45 steel.
(3) The cutting force per unit area is large: the main cutting force is about 20% smaller than that of steel cutting. Because the contact length between chip and rake face is very short, the cutting force per unit contact area is greatly increased, which is easy to cause edge collapse. At the same time, because the elastic modulus of titanium alloy is small, it is easy to produce bending deformation under the action of radial force, which causes vibration, increases tool wear and affects the accuracy of parts. Therefore, the process system should have better rigidity.
(4) Severe cold hardening phenomenon: due to the high chemical activity of titanium, it is easy to absorb oxygen and nitrogen in the air to form a hard and brittle skin at high cutting temperature; at the same time, the plastic deformation in the cutting process will also cause surface hardening. The phenomenon of cold hardening can not only reduce the fatigue strength of parts, but also increase the tool wear, which is a very important feature in cutting titanium alloy.
(5) Tool is easy to wear: after the blank is processed by stamping, forging, hot rolling and other methods, it forms a hard and brittle uneven skin, which is very easy to cause the phenomenon of edge collapse, making the removal of hard skin the most difficult process in titanium alloy processing. In addition, due to the strong chemical affinity of titanium alloy to tool materials, the tool is easy to produce adhesive wear under the conditions of high cutting temperature and large cutting force per unit area. When turning titanium alloy, sometimes the wear of the rake face is even more serious than that of the rake face; when the feed rate f < 0.1 mm / R, the wear mainly occurs on the rake face; when f > 0.2 mm / R, the rake face will be worn; when finishing and semi finishing with cemented carbide tools, VBmax < 0.4 mm is suitable for the wear of the rake face.
Machining hard alloy materials such as titanium alloy requires large cutting force, or high torque spindle. However, the spindle torque of high-speed CNC machining process tools typically used in HEM-HSM machining of light alloy materials such as aluminum alloy is mostly less than 100nm, generally less than 200nm, which does not have the ability of machining hard alloy materials such as titanium alloy with high efficiency.
Generally, only low cutting speed is allowed for machining hard alloy materials such as titanium alloy, that is, only low spindle speed can be used. However, the spindle speed range of high-efficiency high-speed CNC machine tool typically used for HEM-HSM machining of light alloy materials such as aluminum alloy does not meet the requirements of current titanium alloy processing technology. | <urn:uuid:1a623167-b860-42f3-b60c-b4b7c3d4e538> | CC-MAIN-2022-40 | https://pagalsongs.in/t-titanium-cnc-cutting-challenges-characteristics-and-methods/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334332.96/warc/CC-MAIN-20220925004536-20220925034536-00317.warc.gz | en | 0.938374 | 895 | 3.0625 | 3 |
The first book of the Mathematics in Action series, Prealgebra Problem Solving, Third Edition illustrates how mathematics arises naturally from everyday situations through updated and revised real-life activities and accompanying practice exercises.
This unique approach helps students increase their knowledge of mathematics, sharpen their problem-solving skills, and raise their overall confidence in their ability to learn. Technology integrated throughout the text helps students interpret real-life data algebraically, numerically, symbolically, and graphically. The active style of this book develops students’ mathematical literacy and builds a solid foundation for future study in mathematics and other disciplines.
This title is also sold in the various packages listed below. Before purchasing one of these packages, speak with your professor about which one will help you be successful in your course. | <urn:uuid:a75f95b5-6e96-46b0-a160-8e8fb384677e> | CC-MAIN-2016-26 | http://www.mypearsonstore.com/bookstore/mathematics-in-action-prealgebra-problem-solving-books-0321692896 | s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393093.59/warc/CC-MAIN-20160624154953-00150-ip-10-164-35-72.ec2.internal.warc.gz | en | 0.931015 | 159 | 3.109375 | 3 |
|History of IBM mainframes, 1952–present|
IBM Z is a family name used by IBM for all of its mainframe computers from the Z900 on. In July 2017, with another generation of products, the official family was changed to IBM Z from IBM z Systems; the IBM Z family now includes the newest model the IBM z14, as well as the z13 (released under the IBM z Systems/IBM System z names), the IBM zEnterprise models (in common use the zEC12 and z196), the IBM System z10 models (in common use the z10 EC), the IBM System z9 models (in common use the z9EC) and IBM eServer zSeries models (in common use refers only to the z900 and z990 generations of mainframe).
The zSeries, zEnterprise, System z and IBM Z families were named for their availability – z stands for zero downtime. The systems are built with spare components capable of hot failovers to ensure continuous operations.
The IBM Z family maintains full backward compatibility. In effect, current systems are the direct, lineal descendants of System/360, announced in 1964, and the System/370 from the 1970s. Many applications written for these systems can still run unmodified on the newest IBM Z system over five decades later.
Virtualization is required by default on IBM Z systems. First layer virtualization is provided by the Processor Resource and System Manager (PR/SM) to deploy one or more Logical Partitions (LPARs). Each LPAR supports a variety of operating systems. A hypervisor called z/VM can also be run as the second layer virtualization in LPARs to create as many virtual machines (VMs) as there are resources assigned to the LPARs to support them. The first layer of IBM Z virtualization (PR/SM) allows a z machine to run a limited number of LPARs (up to 80 on the IBM z13). These can be considered virtual "bare metal" servers because PR/SM allows CPUs to be dedicated to individual LPARs. z/VM LPARs allocated within PR/SM LPARs can run a very large number of virtual machines as long as there are adequate CPU, memory, and I/O resources configured with the system for the desired performance, capacity, and throughput.
IBM Z's PR/SM and hardware attributes allow compute resources to be dynamically changed to meet workload demands. CPU and memory resources can be non-disruptively added to the system and dynamically assigned, recognized, and used by LPARs. I/O resources such as IP and SAN ports can also be added dynamically. They are virtualized and shared across all LPARs. The hardware component that provides this capability is called the Channel Subsystem. Each LPAR can be configured to either "see" or "not see" the virtualized I/O ports to establish desired "shareness" or isolation. This virtualization capability allows significant reduction in I/O resources because of its ability to share them and drive up utilization.
List of models (reverse chronological order)
Since the move away from the System/390 name, a number of IBM Z models have been released. These can be grouped into families with similar architectural characteristics.
- IBM z14 ZR1 (3907 series) single-frame mainframe introduced on April 10, 2018
- IBM z14 (3906 series) mainframe introduced on July 17, 2017
- Official IBM z14 mainframe product page
- IBM Redbooks z14 technical guide
IBM System z13
- z Systems z13s (2965 series), introduced on February 17 2016
- z Systems z13 (2964 series), introduced on January 13, 2015
IBM zEnterprise System
The IBM zEnterprise System (zEnterprise), announced in July 2010, with the z196 model, is designed to offer both mainframe and distributed server technologies in an integrated system. The zEnterprise System consists of three components. First is a System z server. Second is the IBM zEnterprise BladeCenter Extension (zBX). Last is the management layer, IBM zEnterprise Unified Resource Manager (zManager), which provides a single management view of zEnterprise resources. The zEnterprise is designed to extend mainframe capabilities – management efficiency, dynamic resource allocation, serviceability – to other systems and workloads running on AIX on POWER7, and Microsoft Windows or Linux on x86.
The zEnterprise BladeCenter Extension (zBX) is an infrastructure component that hosts both general purpose blade servers and appliance-like workload optimizers which can all be managed as if they were a single mainframe. The zBX supports a private high speed internal network that connects it to the central processing complex, which reduces the need for networking hardware and provides inherently high security.
The IBM zEnterprise Unified Resource Manager integrates the System z and zBX resources as a single virtualized system and provides unified and integrated management across the zEnterprise System. It can identify system bottlenecks or failures among disparate systems and if a failure occurs it can dynamically reallocate system resources to prevent or reduce application problems. The Unified Resource Manager provides energy monitoring and management, resource management, increased security, virtual networking, and information management from a single user interface.
Highlights of the original zEnterprise z196 include:
- BladeCenter Extension (zBX) and Unified Resource Manager
- Up to 80 central processors (CPs)
- 60% higher capacity than the z10 (up to 52,000 MIPS)
- Twice the memory capacity
- 5.2 GHz quad-core chips
The newest zEnterprise, the EC12, was announced in August 2012, and included:
- Up to 101 central processors (CPs)
- 50% higher capacity than the z196 (up to 78,000 MIPS)
- Transactional Execution
- 5.5 GHz hex-core chips
- Flash Express – integrated SSDs which improve paging and certain other I/O performance
On April 8, 2014, in honor of the 50th anniversary of the System/360 mainframe, IBM announced the release of its first converged infrastructure solution based on mainframe technology. Dubbed the IBM Enterprise Cloud System, this new offering combines IBM mainframe hardware, software, and storage into a single system and is designed to compete with competitive offerings from VCE, HP, and Oracle. According to IBM, it is the most scalable Linux server available with support for up to 6,000 virtual machines in a single-footprint.
Specific models from this family include:
- zEnterprise BC12 (2828 machine type), introduced on July 23, 2013
- zEnterprise EC12 (2827 series), introduced on August 28, 2012
- zEnterprise 114 (2818 series), introduced on July 6, 2011
- zEnterprise 196 (2817 series), introduced on July 22, 2010
IBM System z10
The IBM System z10 servers supported more memory than previous generation systems and can have up to 64 central processors (CPs) per frame. The full speed z10 processor's uniprocessor performance was up to 62% faster than that of the z9 server, according to IBM's z10 announcement, and included these other features:
- 50% more performance and 70% more usable capacity. The new 4.4 GHz processor was designed to address CPU intensive workloads and support large scale server consolidation on the mainframe.
- Just-in-time capacity and management – monitoring of multiple systems based on Capacity Provisioning and Workload Manager (WLM) definitions. When the defined conditions are met, z/OS can suggest capacity changes for manual activation from a z/OS console, or the system can add or remove temporary capacity automatically and without operator intervention.
Specific models from this family include:
- z10 Business Class (2098 series), introduced on October 21, 2008
- z10 Enterprise Class (2097 series), introduced on February 26, 2008
IBM System z9
In July 2005, IBM announced a new family of servers – the System z9 family – with the IBM System z9 Enterprise Class (z9 EC) and the IBM System z9 Business Class (z9 BC) servers. The System z9 servers offered:
- More flexibility on the enterprise class servers in customizing and sizing the capacity of the general purpose processors (CPs) that reside in the server. The z9 EC servers offered four different sub-capacity settings when run with eight or fewer general purpose processors.
- zIIP engines. The zIIP is designed so that a program can work with z/OS to have all or a portion of its Service Request Block (SRB) dispatched work directed to the zIIP to help free up capacity on the general purpose processor which may make it available for use by other workloads running on the server.
- MIDAW. The Modified Indirect Data Address Word (MIDAW) facility offers an alternative facility for a channel program to be constructed. It is designed to improve performance for native FICON applications that use extended format datasets (including DB2 and VSAM) by helping to improve channel utilization, reduce channel overhead, and improve I/O response times.
- CP Assist for Cryptographic Functions (CPACF) is shipped on every CP and IFL processor in support of clear key encryption. CPACF was enhanced for System z9 processors to include support of the Advanced Encryption Standard (AES) for 128-bit keys, Secure Hash Algorithm-256 (SHA-256), CPACF offers DES, Triple DES and SHA-1.
Specific models from this family include:
- z9 Business Class (2096 series), successor to the z890 and smallest z990 models (2006)
- z9 Enterprise Class (2094 series), introduced in 2005, initially as z9-109, beginning the new System z9 line
IBM zSeries family
The zSeries family, which includes the z900, z800, z990 and z890, introduced IBM's newly designed 64-bit z/Architecture to the mainframe world. The new servers provide more than four times the performance of previous models. In its 64-bit mode the new CPU is freed from the 31-bit addressing constraints of its predecessors. Major features of the eServer zSeries family:
- Based on z/Architecture (64-bit real and virtual addresses), as opposed to earlier ESA/390 (31-bit) used in S/390 systems yet emphasizing the backwards compatibility the ESA/390 applications are fully compatible with z/Architecture
- First zSeries Superscalar server (z990) – A superscalar processor allows concurrent execution of instructions by adding additional resources onto the microprocessor to achieve more parallelism by creating multiple pipelines, each working on its own set of instructions.
- Offers up to 32 central processors (CPs) per frame
- Frames can be coupled in up to a 32-frame Sysplex, with each frame physically separated up to 100 kilometers
- Supports the z/OS, Linux on System z, z/VM, z/VSE, and z/TPF operating systems
- Support of multiple I/O channel subsystem – or multiple Logical Channel Subsystem (LCSS). The z990 allows for support of up to four LCSS – offering support for up to 4 times the previous 256 channel limit
- Support for zAAP processors. These specialty processors allow IBM JVM processing cycles to be executed on the configured zAAPs with no anticipated modifications to the Java application(s). This means that deployment and integration of new Java technology-based workloads can happen on the very same platform as heritage applications and core business databases in a highly cost-effective manner
Specific models from this family included:
- z890 (2086 series), successor to the z800 and smaller z900 models (2004)
- z990 (2084 series), successor to larger z900 models (2003)
- z800 (2066 series), entry-level, less powerful variant of the z900 (2002)
- z900 (2064 series), for larger customers (2000)
- IBM Corporation, IBM Mainframes - IBM Z, retrieved 2015-04-20
- Selecting System z operating environments: Linux or z/OS?
- "Mainframe strength: Continuing compatibility". z/OS basic skills information center. IBM. Retrieved 12 October 2012.
- Bannan, Karen. "The zEnterprise EC12 Raises Enterprise Security While Boosting Analytics and Cloud Performance". IBM Systems Magazine. IBM. Retrieved 29 August 2014.
- "z/VM Security and Integrity Resources". IBM. Retrieved 29 August 2014.
- "IBM - KVM for IBM z Systems". IBM. Retrieved 14 March 2016.
- "IBM unveils new cloud-ready mainframe based on single-frame design - IBM IT Infrastructure Blog". IBM IT Infrastructure Blog. 2018-04-10. Retrieved 2018-04-13.
- IBM Mainframe Ushers in New Era of Data Protection
- The enterprise mainframe server – the core of trusted digital experiences
- IBM z14 Technical Guide - A draft IBM Redbooks publication
- IBM Unveils New Mainframe for Encrypted Hybrid Clouds
- IBM. "IBM Launches z13 Mainframe". IBM. IBM. Retrieved 20 April 2015.
- "Introducing the zEnterprise System". IBM zEnterprise System Technical Introduction. IBM. Retrieved 2 October 2012.
- "IBM's mainframe-blade hybrid to do Windows". The Register. Retrieved 12 October 2012.
- "IBM Brings New Cloud Offerings, Research Projects and Pricing Plans to the Mainframe". IBM News Room. IBM. 8 April 2014. Retrieved 2014-07-18.
- "IBM Enterprise Cloud System". IBM System z: Enterprise Cloud System. IBM. 8 April 2014. Retrieved 2014-07-18.
- "IBM Brings New Cloud Offerings, Research Projects and Pricing Plans to the Mainframe". Enterprise Systems Media. 10 April 2014. Retrieved 2014-07-18.
- Taft, Darryl (2014-06-27). "IBM Ships Its First Enterprise Cloud System to Vissensa". eWeek. Retrieved 2014-07-18.
- "System functions and features". IBM System z10 Business Class Technical Overview. IBM.
- Introduction to the New Mainframe. IBM Corporation. March 2011. p. 6.[dead link]
- "Multichip Module Packaging and Its Impact on Architecture" (PDF).
- "IBM's z12 mainframe engine makes each clock count". The Register. Retrieved 14 April 2017.
- Burt, Jeffrey (10 April 2018). "IBM Slims Down Pair of Mainframes for the Cloud". Security. eWeek. Retrieved 2018-04-15.
The z14 Model ZR1 and LinuxONE Rockhopper II put the capabilities of IBM’s Z14 mainframe systems announced last year into an industry-standard 19-inch, single-frame design....
|Wikimedia Commons has media related to IBM Z.|
- IBM Z web site
- Mainframe Ushers in New Era of Data Protection IBM z14 announcement Press Release
- IBM IT Infrastructure web page
- IBM Destination z
- Mainframe Software Support Forum
- IBM Systems Mainframe Magazine
- Z6 microprocessor The follow-on to Z9, by Charles F. Webb of IBM
- IBM Archives: A Brief History of the IBM ES/9000, System/390 AND zSeries | <urn:uuid:9931e373-f8d9-490f-b7bc-b7858e1f39a4> | CC-MAIN-2018-43 | https://en.wikipedia.org/wiki/IBM_System_z | s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509326.21/warc/CC-MAIN-20181015142752-20181015164252-00429.warc.gz | en | 0.886597 | 3,282 | 2.859375 | 3 |
Christian beliefs about creation
- God created the world in 6 days
- On the 7th day God rested- origin of the Sabbath day
- The world created by God is very good.
- It is ordered, fruitful and contains everything that is needed.
- God created humans on the 6th day.
"Be fruitful and multiply, and fill the earth and subdue it, and have dominion over the fish of the sea and over the birds of the air and over every living thing that moves upon the earth."
Christian belifes about creation
- God took man and put him in the Garden of Eden to till it and keep it.
- Adam and Eve the first humans, job is to look after the garden and all the creatures.
- Free to eat of every tree except the tree of the knowledge of good and evil.
- Eve eats from the tree, then Adam and they are banished from the Garden of Eden.
Humans have a God-given responsibility of being his 'stewards'. i.e they must look after the world on his behalf.
This is shown in both passages, "have dominion over" and "to till and keep it".
Humans are to enjoy the world but within the limits set by God and they should obey his will.
Humans should not exploit the world and all of its natural resources, it doesn't belong to them, they are just looking after it on God's behalf. A problem phrase is "dominion over" which some people might take to mean complete control, however it should be translated as "be responsible for". Therefore humans should look after the environment.
Christian treatment of animals
Some Christians belive that the creation story suggests that humans should be vegetarian, "I give you every seed-bearing plant...and every tree that has fruit with seed in it they will be yours for food" This doesn;t mention using animals for food.
Others argue that this is only one particular part of the bible and in other parts humans eat meat and also sacrifice animals to God.
As long as animals are raised in satisfactory conditions and slaughtered appropriately, there is nothing wrong with eating meat.
Christian treatment of animals
The creation stories show that humans are creatures just like the other animals. Some Christians would therefore agree that hunting, cosmetic testing and medical testing on animals is not the right way to practice stewardship.
However, Christians belive humans have souls which make them individual, able to have a spiritual connection with God and the soul will live beyond death. Animals do not have souls, this could be a reason why humans are considered in some way different to animals.
Some Christians may believe that medical testing on animals is appropriate as it could save a human life which is worth more than that of an animal.
Islamic Creation Story
- All creation was made by Allah, he said 'Be' and it was created.
- He decided what the universe should contain, what laws it should obey and when it should die.
- Two things were capable of chhosing whether to follow Allah's will or not- humans and jinns ( spirits)
- When Alah created humanity he tool 7 handfuls of soil, each a different colour. He moulded humans of out his soil and breathed life into the clay.
- The first 2 humans were Adam and Eve.
- They were told not to eat from the tree of eternity bt they disobeyed Allah and were banished from the heavenly garden of Paradise but were told if they followed Allah and his will they could return at death. Those who didnt would be cast to hell.
Islamic beliefs about creation
- Muslims believe that the world is God's creation and therefore the environment should be treated with respect.
- Allah gave humans the responsibility to care for the world and everything in it.
- Humans must follow Allah's will and treat other people in a way that reflects their equality and humanity.
SHIRI'AH- means 'the straight path' this is the Islamic law that is based on the Qur'an. Muslims believe by following the Shiri'ah they are living life the way Allah wants them to.
UMMAH- is the Muslim community, all Muslim's are equal. All Muslims learn Arabic as it is their common language. The whole of the Ummah is united.
Muslim teaching on the treatment of creation
- The world does not belong to poeple it belongs to Allah. People have the role of KALIFAH. (stewardship)
- Their is a pattern and balance in the world, the FITRAH, which humans should help to maintain.
- On the day of judgement they will be called to account for how well they've done this and will be questioned about their care-taking role.
- In looking after the environment Muslims should ensure that careful use is made of scarce resourses such as water. It shouldn't be wasted.
- Trees should be replanted where others have been cut down
- People should consume less e.g saving fuel and recyling.
Treatment of animals in Islam
- Islam teaches that mercy and compassion should be treated towards every living creature because Allah loves everything he's made.
- Cruelty to animals is absolutely forbidden.
- Beasts if burden should never carry loads that are too heavy.
- Animal sports e.g fighting and hunting, are forbidden.
- People may only take the life of animal if it is for food or for another useful purpose.
- Muslims don't agree with cosmetics testing on animals
- Muslims might accept medical testing if there was no alternative to using animals.
- Muslims don't eat any sort of meat unless it has been killed in the quickest and most painless manner with a prayer in the name of Allah- HALAL SLAUGHTER. ......
Treatment of Animals in Islam
- Every Muslim man should know the correct way to kill an animal (the animal's throat should be cut with a sharp knife)
- People should only eat meat if they know how it has been killed.
- People whp mistreat animals will be anserable to their actions on judgement day.
- Muslims belive that humans have a soul that lives on after death. The soul will be asked questions by two angels : "Who is your Lord?", What is your religion?", "Who is that man sent among you?" | <urn:uuid:98b29e45-6c1a-45c6-9492-80fc2e99ae86> | CC-MAIN-2017-04 | https://getrevising.co.uk/revision-cards/creationstewardship_and_the_environment_3 | s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285244.23/warc/CC-MAIN-20170116095125-00185-ip-10-171-10-70.ec2.internal.warc.gz | en | 0.965429 | 1,315 | 2.59375 | 3 |
Plants prime permafrost soil carbon loss across the Circum-Arctic
Using a new, observation-based model, we show a strong acceleration of soil organic matter decomposition by plant roots ("rhizosphere priming effect") across the northern permafrost area, highlighting the importance of fine-scale ecological interactions for large-scale greenhouse gas fluxes.
Permafrost – continuously frozen ground – extends over almost a quarter of the terrestrial northern hemisphere and stores vast amounts of soil organic matter. One of the largest uncertainties of global climate projections is the degradation of this organic matter to greenhouse gases following permafrost thaw, which could further accelerate climate warming – the permafrost-carbon-climate feedback.
In contrast to the wide-spread idea of permafrost environments as hostile to life, most of the northern permafrost area features thriving tundra or forest vegetation that might become even more productive as it becomes warmer. Although it has been known since the 1950s that plants can accelerate soil organic matter decomposition near their roots ("rhizosphere priming effect"), this has not been considered for projections of large-scale greenhouse gas emissions so far.
The priming effect can be easily demonstrated under laboratory conditions; quantifying its magnitude in natural environments however is challenging due to limited understanding of the underlying mechanisms, a multitude of interacting effects, the interdisciplinary nature of the subject and shortage of suitable databases. Together, these challenges have prevented large-scale estimates of the priming effect, and of its importance for the permafrost-carbon-climate feedback.
In 2015, a one-week workshop on priming in permafrost systems at the Climate Impacts Research Centre (Abisko, northern Sweden) brought together a hand-picked group of subject-experts working on plants, priming, and permafrost: ecologists, biogeochemists, plant physiologists, soil scientists and geographers; experimentalists and modelers. We connected over a shared frustration over the limitations of existing laboratory-based studies (some of which our own) that suggest the priming effect as a potential major source of greenhouse gas emissions from permafrost soils – when its large-scale impact had in fact never been quantified. Stimulating interdisciplinary exchange and ideas, the workshop became the starting point of the development of the PrimeSCale model to provide a first estimate of the magnitude of priming across the Circum-Arctic. The development of the model followed three guidelines, (a) to use observational data where possible, (b) to keep the model conservative, i.e. err on the low-priming side, and (c) to keep the model as simple as possible and as complex as necessary. While we initially had a rough back-of-the-envelope calculation in mind, easily generated within a week of work, we soon came to realize that in order to provide any meaningful estimate of priming, the spatial variability of plant and soil properties needs to be considered, including the landscape and the soil depth dimension. It took us more than four years of inspiring discussion to complete the model.
The final model has a spatial resolution of 5 km x 5 km x 5 cm and combines large-scale databases on Circum-Arctic plant and soil properties, model projections on e.g. active layer deepening and gross primary production rates and two new meta-analyses on the magnitude of rhizosphere priming as a function of plant productivity, and on rooting depth distribution patterns of different vegetation types. This allowed us to provide a first estimate of the magnitude of priming across the Circum-Arctic under current and future conditions.
The PrimeSCale model suggests an absolute loss of 40 Pg SOC by the rhizosphere priming effect between 2010 and 2100 (RCP 8.5 climate scenario). This value exceeds current best-estimates of e.g. greenhouse gas emissions by abrupt permafrost collapse (Turetsky et al., 2020), methane emissions from Arctic lakes (Wik et al., 2016) and from the particularly vulnerable East Siberian Arctic Shelf (Shakhova et al., 2014). However, we also emphasize the enormous uncertainties of our priming estimate. The 10-90% confidence interval of 6.0 – 80 Pg SOC loss highlights the poor constraints on many key parameters. In addition, a range of processes and parameters that likely influence priming could not be included in the model due to lack of observational data, such as the impact of frequent anoxia in permafrost soils, of dissolved organic carbon leaching, and of mycorrhization. We see our model as a first, but not last step towards estimating priming-induced greenhouse gas emission over large spatial scales and we hope that it will inspire a new generation of observational studies targeting the key sources of uncertainty.
- Brown, J., Ferrians, O. J. Jr, Heginbottom, J. A. & Melnikov, E. S. Circum-Arctic Map of Permafrost and Ground-Ice Conditions, Version 2 (National Snow and Ice Data Center, 2002).
- Hugelius, G. et al. The Northern Circumpolar Soil Carbon Database: spatially distributed datasets of soil coverage and soil carbon storage in the northern permafrost regions. Earth Syst. Sci. Data 5, 3–13 (2013).
- Shakhova, N. et al. Ebullition and storm-induced methane release from the East Siberian Arctic Shelf. Nat. Geosci. 7, 64–70 (2014).
- Turetsky, M. R. et al. Carbon release through abrupt permafrost thaw. Nat. Geosci. 13, 138–143 (2020).
- Walker, D. A. et al. The Circumpolar Arctic vegetation map. Journal of Vegetation Science 16, 267–282 (2005).
- Wik, M., Varner, R. K., Anthony, K. W., MacIntyre, S. & Bastviken, D. Climate-sensitive northern lakes and ponds are critical components of methane release. Nat. Geosci. 9, 99–105 (2016). | <urn:uuid:2d008c56-3461-429e-a9fd-0804c766abc9> | CC-MAIN-2021-49 | https://ecoevocommunity.nature.com/posts/plants-prime-permafrost-soil-carbon-loss-across-the-circum-arctic | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358208.31/warc/CC-MAIN-20211127163427-20211127193427-00338.warc.gz | en | 0.89193 | 1,278 | 3.265625 | 3 |
early 15c., "carnal, concerning the body" (in distinction from the spirit or intellect);" mid-15c., "of, affecting, or pertaining to the (physical) senses" (a meaning now obsolete), from Old French sensual, sensuel (15c.) and directly from Late Latin sensualis "endowed with feeling" (see sensuality).
The specific meaning "connected with gratification of the senses" is from late 15c., especially "lewd, unchaste, devoted to voluptuous pleasures." Related: Sensually.
word-forming element meaning "one who does or makes," also used to indicate adherence to a certain doctrine or custom, from French -iste and directly from Latin -ista (source also of Spanish, Portuguese, Italian -ista), from Greek agent-noun ending -istes, which is from -is-, ending of the stem of verbs in -izein, + agential suffix -tes.
Variant -ister (as in chorister, barrister) is from Old French -istre, on false analogy of ministre. Variant -ista is from Spanish, popularized in American English 1970s by names of Latin-American revolutionary movements.
<a href="https://www.etymonline.com/word/sensualist">Etymology of sensualist by etymonline</a>
Harper, D. (n.d.). Etymology of sensualist. Online Etymology Dictionary. Retrieved $(datetime), from https://www.etymonline.com/word/sensualist | <urn:uuid:6c4d39a3-ce75-4b68-88a1-b529128bb621> | CC-MAIN-2023-06 | https://www.etymonline.com/word/sensualist?utm_source=related_entries | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500288.69/warc/CC-MAIN-20230205193202-20230205223202-00515.warc.gz | en | 0.860202 | 373 | 2.59375 | 3 |
This tricky worksheet will make your little reader think by determining which ending is used by a group of words in a word family.
Just look at the pictures and read the words, and check the ending each word uses! Your child will analyze words and vowel sounds using this spelling worksheet: Short Vowel Sounds "O"!
Note: You will not be billed until your free trial has ended and can cancel at any time. No strings attached. | <urn:uuid:cd73ff88-24fd-4368-a6c8-ab69a0e475a4> | CC-MAIN-2021-43 | https://www.kidsacademy.mobi/printables/l2sounds-o/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588242.22/warc/CC-MAIN-20211027181907-20211027211907-00048.warc.gz | en | 0.933775 | 92 | 3.671875 | 4 |
Arts & Crafts Teacher Resources
Find Arts & Crafts educational ideas and activities
Showing 61 - 80 of 11,021 resources
Forced to Flee: Famine and Plague
Students examine facts about the Irish Potato Famine and explore primary resources, such as newspaper articles, photographs, songs, and poems, related to the famine. Once research is complete, they create a small collage of their...
6th - 8th Social Studies & History
Will Global Warming Push Trees to Extinction?
Students create a photo-collage. They combine different pictures into one composition. Students discuss and analyze the nature and mood of collage-making. They write their ideas and reflections of their original artwork.
4th - 8th Visual & Performing Arts
Holiday Paper Projects for Kids
Students construct several holiday craft projects. In this holiday art activity, students are instructed on how to complete several paper projects such as a 4th of July flags, a Thanksgiving place mat and holiday cards. All of these...
Pre-K - 2nd Social Studies & History
Project Organizer: Follow an Explorer
This is both a great idea and a great way to help your class organize a themed project. They use these worksheets to assist them in writing a creative historical narrative about the life and travels of an explorer. They'll compare and...
5th - 7th English Language Arts
Am I Taller than an Antelope
Learners investigate biology by examining body sizes of different animals. In this antelope measurement lesson, students research the physicality of Antelopes and other large animals that inhabit Earth. Learners create a model Antelope...
K - 2nd Visual & Performing Arts
Picture Collage Book Report: Voltaire's Candide
Here's an alternative to a traditional book report for your class to demonstrate that they understand and can articulate the main character's evolution and the social themes presented in Voltaire's satirical novel Candide. Your young...
9th - 12th English Language Arts | <urn:uuid:0ac7bd6b-fb79-4d19-88c9-0e441d3eb48b> | CC-MAIN-2014-52 | http://www.lessonplanet.com/lesson-plans/arts-and-crafts/4 | s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802774464.128/warc/CC-MAIN-20141217075254-00021-ip-10-231-17-201.ec2.internal.warc.gz | en | 0.921225 | 416 | 3.46875 | 3 |
27 January 2017
A new estimate of badger setts indicates the number of badger social groups, although not the number of badgers in them, nor whether group sizes have changed between various surveys.
This study is a welcome addition to knowledge of an important aspect of ecology. It was done by trained professional surveyors and the work was funded by the Department of the Environment, Food and Rural Affairs. Bristol University used volunteers for two earlier studies in the 1980s and ‘90s for the People’s Trust for Endangered Species.
The authors say: “The general relationship between social group abundance and population size has not been established”. They estimate that since 1985-88 the number of social groups has increased by 103 per cent in England, but has remained relatively constant in Wales.
The Badger Trust says that if there has been a significant increase in badger numbers, it is to be celebrated as a result of the legal protection from the persecution that badgers have experienced in the past. Badger populations are naturally limited by their food supply and the population in England and Wales is returning to an equilibrium. The Protection of Badgers Act is doing its job preventing local extinction in some areas.
In the context of controlling cattle TB, the science has shown that there is no justification for killing badgers.
Density and abundance of badger social groups in England and Wales in 2011–2013.
Johanna Judge, Gavin J. Wilson and Richard J. Delahay of the National Wildlife Management Centre, Animal Health and Veterinary Laboratories Agency, Woodchester Park, Gloucestershire, Roy Macarthur Food and Environment Research Agency, Sand Hutton, York, and Robbie A. McDonald, Environment and Sustainability Institute, University of Exeter, Penryn, Cornwall.Jack Reedy
0775 173 1107 | <urn:uuid:e581bad3-4682-4d55-b7c0-5eff2c005e6a> | CC-MAIN-2023-14 | https://brianmay.com/brian-news/2014/01/press-release-estimate-of-badger-sett-numbers-welcome/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949093.14/warc/CC-MAIN-20230330004340-20230330034340-00246.warc.gz | en | 0.94128 | 403 | 3.140625 | 3 |
The world is a maximalist place; it is made of big things. Tall buildings and bridges tower over you. You will see stacks of shipping containers in ports. Telecommunications towers are all over the world. When you stare at these things, you might ask how people build them or stack the pieces on each other. Clearly, this is the job for professionals.
The professionals dealing with lifting loads, pulling them up, and placing them in their designated places are called riggers. These people specialize in equipment and machinery that use ropes, pulleys, and items that can be maximized for lifting. More often than not, riggers use heavy equipment, such as giant pulleys and cranes.
Riggers are professionally trained to carry out these jobs. The work they are pursuing can be dangerous, so it is a must that riggers also have great knowledge of safety. If you wish to know more about these people, who are most likely employed by rigging companies in Florida, here are some of the things you may want to look at:
The Main Job of Riggers
The services of riggers are mainly used in the construction industry. They are often hired to lift heavy objects and equipment when building commercial structures and residential properties. Riggers are also assigned with taking the equipment to the higher floors of a construction project—so if you see cranes at top of a building, a rigger has probably made it possible. Riggers also transport heavy items, such as platforms, container vans, and forklifts. It is also common to see riggers in the following industries: military, drilling, logistics, and entertainment, particularly in movies.
Job Description Simplified
Riggers do not just lift heavy objects in an instant. Their job is so intensive that it needs painstaking planning and careful considerations, especially if the job will be carried out in a place surrounded by people. The job of a rigger starts with a thorough assessment of the load to be carried, which includes the inspection of its weight and size. Once they are through with the examination, they will assembly the lifting system and use a combination of materials and equipment, such as ropes, cranes, and pulleys.
While doing all these things, riggers are expected to stay within the bounds of certain safety regulations. Some riggers may specialize in inspections and repairs of the material.
The Qualities of a Reliable Rigger
Responsible riggers are good communicators. They are supposed to simplify instructions so that directing the movement of cranes and lifting equipment will be efficient and safe. They are also good planners, often focusing on the construction of the rigging system and coming up with contingency plans. Besides these qualities, riggers should not be afraid of heights.
Alongside architects, engineers, and masons are the riggers who make lifting heavy loads possible. Knowing what these professionals do will make you appreciate how the world is built. If you are inspired by what these people do, you may even consider taking this career path, knowing that a lot of industries need them. | <urn:uuid:f87ff3df-b639-4118-afc4-da9d803d7eea> | CC-MAIN-2023-50 | https://meredisciple.com/do-you-even-lift-the-importance-of-working-with-a-rigger/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100710.22/warc/CC-MAIN-20231208013411-20231208043411-00128.warc.gz | en | 0.971651 | 621 | 2.5625 | 3 |
The Sierra Leone Telegraph: 19 February 2013
Neither: And the answer is?
C: Pregnancy and Childbirth
The sad truth is that somewhere in Africa, one woman dies every minute of every day, from causes related to pregnancy and birth.
But, it seems the hardest pill to swallow for even the most successful African nations is that: giving life to the continent’s next generation is one of the biggest killers of Africa’s women.
More often than not it is preventable: Uncontrolled bleeding, infection, poor medical care and a lack of education, still sit at the very heart of this hidden crisis.
Those who survive may still suffer. For every woman who dies during childbirth, it is estimated that another 30 are injured or become sick bringing life to the world. Africa’s poorest are the most vulnerable.
Too many babies also die unnecessarily. In Africa, over a million newborn die each year – that is – nearly four in every single minute.
If Africa is to advance, MORE needs to be done – SIGNIFICANTLY more.
Yesterday, 18th February 2013, MamaYe (http://www.mamaye.org), a public action campaign to save the lives of mothers and babies, was launched in five countries, most affected by the crisis of maternal and newborn mortality: Nigeria, Ghana, Sierra Leone, Malawi and Tanzania.
This is the first part of a continent-wide campaign, which will use digital and mobile phone technology to engage ordinary Africans in the most important fight of all – the battle to save mothers and babies.
MamaYe is a campaign initiated by Evidence for Action (E4A), a multi-year programme, which aims to improve maternal and newborn survival in sub-Saharan Africa.
Funded by the UK Department for International Development (DFID), the campaign focuses on using a strategic combination of evidence, advocacy and accountability to save lives in Ghana, Malawi, Nigeria, Ethiopia, Sierra Leone and Tanzania.
MamaYe aims to educate and encourage communities to take collective and individual action for pregnant mothers amongst them. It will seek to overcome the ingrained belief that responsibility for maternal and newborn survival rests elsewhere: with ‘the government’, ‘the ministry’, ‘professionals’, ‘the UN’ or foreign donors.
The active participation of Africans as a whole is a critical ingredient for success.
MamaYe believes that technology can educate, motivate and mobilise people to take direct action to respond to the maternal and newborn crisis in Africa.
By 2016, it is projected that there will be one billion mobile phones in Africa; 167,335,676 internet users and 51,612,460 Facebook subscribers.
In Ghana, for example, mobile penetration in the country has reached a record 80% of the country’s population.
“We all have the power and the potential to save the lives of mothers and newborns. Men who support their wives to visit ante-natal clinics are helping to save lives. Taxi drivers who volunteer to get women to clinics in time for the birth can do the same. Voluntarily giving blood also saves lives, by helping women who haemorrhage during childbirth.
“Government officials that ensure clinics are well stocked with drugs and other essentials, are nothing less than life-savers. Midwives that respond to a crisis in the middle of the night are maternal survival heroines.
“We can all play our part. Childbirth is not a disease. We have known for decades what it takes to ensure the survival of women and babies in childbirth. But if our mothers are to survive, then the African public must also step up, take responsibility and become more involved and vigilant.
“MamaYe will provide the evidence, information and tools necessary to empower our citizens to demand change.
“All it takes to make the change is YOU. “
You can be a part of that change. Visit http://www.mamaye.org to find out more about making a life-saving change for mothers and babies of Africa.
At this website you will find easy to understand evidence, stories of heroes and heroines, commitments made by government and different actions you can take for this important cause.
You can make your voice heard and demand more by joining the MamaYe campaign at:
You can also contact Rachel Haynes. Email: [email protected]
And for in-country contacts – see below:
Ghana: Nii Sarpei, Communicatons: [email protected]
Malawi: Mwereti Kanjo, Communications: [email protected]
Nigeria: Morooph Babaranti, Communications: [email protected]
Sierra Leone: Fatou Wurie, Communications: [email protected]
Tanzania: Chiku Lweno-Aboud, Communications: [email protected]
You can also get engaged through the following web and social media platforms:
Pan Africa: http://www.mamaye.org | Facebook.com/MamayeAfrica | Twitter.com/MamaYe
Ghana: http://www.mamaye.org.gh | Facebook.com/MamayeGH | Twitter.com/MamayeGH
Malawi: http://www.mamaye.org.mw | Facebook.com/MamaYeMalaw Twitter.com/MamaYeMW
Nigeria: http://www.mamaye.org.ng | Facebook.com/MamaYeNigeria Twitter.com/MamaYeNigeria
Sierra Leone: http://www.mamaye.org.sl | Facebook.com/MamaYeSL Twitter.com/MamaYeSL
Tanzania: http://www.mamaye.or.tz | Facebook.com/MamaYeTZ | Twitter.com/MamaYeTZ
- In sub-Saharan Africa, the lifetime risk of maternal death is 1 in 16, compared with 1 in 2,800 in developed countries.
- Those who survive may still suffer. For every woman who dies during childbirth, it is estimated that another 30 are injured or become sick bringing life to the world.
- Every day, 444 women die in sub-Saharan Africa due to causes relating to pregnancy and childbirth.
- In Africa, over a million newborns die each year.
- The newborn mortality rate is 44 deaths per 1000 live births in Africa.
- Globally, the countries with the highest rates of newborn mortality are mostly in sub-Saharan Africa.
- (Source: World Health Organization.) | <urn:uuid:8f4a11f3-7ce9-48e5-bef7-ae131fb0c393> | CC-MAIN-2017-47 | http://www.thesierraleonetelegraph.com/what-kills-one-african-woman-every-minute-of-every-single-day/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934803906.12/warc/CC-MAIN-20171117185611-20171117205611-00245.warc.gz | en | 0.896994 | 1,420 | 2.53125 | 3 |
A novelty in the sphere of 3D printing – a tattooing robot
French engineers Johan Da Silveira and Pierre Emm created a tattooing robot Tatoue using 3D printer. The machine looks quite scarily but the results of work are pleasantly surprising.
Today Tatoue is able to reproduce only simple drawings on skin. It can make neat and straight contours which is usually a problem for human masters.
Before the robot starts working a scanned area of skin is uploaded into the program. Then a 3D model of the surface is combined with the drawing and the robot comes into action. Before starting the machine it is important to fasten the body part with straps to avoid unpleasant consequences. | <urn:uuid:27db614c-dfde-44c3-860b-2e803db11e36> | CC-MAIN-2017-26 | https://robot-ex.ru/en/article/novinka-v-sfere-3d-pechati-robot-tatuirovshchik-54261 | s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320763.95/warc/CC-MAIN-20170626133830-20170626153830-00660.warc.gz | en | 0.932829 | 144 | 2.53125 | 3 |
More from my interview with Mathew Block, who asks how God uses our human imaginations to reach us.
CL: How does God use imagination to address humanity?
We believe in the Incarnation—that God became flesh. The Second Person of the Trinity became flesh, became a human being and dwelt among us [John 1:14]. God makes Himself tangible. He becomes one of us so that we can know Him. We also believe in His cross—that Jesus came and took into Himself the sins and sorrows of the world. He physically died for them and physically rose from the dead. The central tenet of Christianity then is about something very tangible.
Even in Christian worship, that message is repeated and proclaimed in physical, tangible ways. In the waters of baptism, Christ makes us part of His body. In the bread and wine, we receive His body and His blood. He gives us His Spirit, yes, but He also gives us His body and His blood. These things—water, bread, and wine—are tangible. And as tangible things, they address us in our imagination. It’s understandable then why so much of Christian art draws on the Sacraments. They address our imaginations, and are a means of reaching us on a very deep level.via Canadian Lutheran Online » Blog Archive » The Christian Imagination: An Interview with Gene Veith.
I would add that God addresses our imagination by giving us His Word to read and hear, through human language. Reading engages our imagination more than anything else, since when we read narratives, descriptions, figures of speech, etc., we picture them in our minds, using the faculty of imagination. So God communicates with us, addressing all of our faculties, by giving us a Book. | <urn:uuid:6eb66a09-8bb8-48f8-81a7-eb3cd2442f48> | CC-MAIN-2018-09 | http://www.patheos.com/blogs/geneveith/2014/06/how-god-uses-the-imagination/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891817999.51/warc/CC-MAIN-20180226025358-20180226045358-00393.warc.gz | en | 0.968498 | 360 | 2.859375 | 3 |
Beroe'a (Βέροια, also written Βέῤῥοια according to Vossius, Thucyd. 1, 61, the Macedonian for Φέροια), the name of two cities mentioned in Scripture.
1. A city in the north of Palestine, mentioned in 2 Maccabees 13:4, in connection with the invasion of Judaea by Antiochus Eupator, as the scene of the miserable death of Menelaus. This seems to be the city in which Jerome says that certain persons lived who possessed and used Matthew's Hebrew Gospel (De Vir. Illust. c. 3). This city (the name of which is written also Βερόη; comp. Beroansis, Pliny 5, 23) was situated in Syria (Strabo, 16:751), about midway between Antioch and Hieropolis (Ptol. 5, 15), being about two days' journey from each (Julian, Epist. 27; Theodoret, 2 22). Chosroes, in his inroad upon Syria, A.D. 540, demanded a tribute from Beroea, which he remitted afterward, as the inhabitants were unable to pay it (Procop. Bell. Pers. 2, 7; Le Beau, Bas Empire, 9, 13).; but in A.D. 611 he occupied this city (Gibbon, 8:225). It owed its Macedonian name Beroea to Seleucus Nicator (Niceph. Hist. Eccl. 14, 39), and continued to be called so till the conquest of the Arabs under Abu Obeidah, A.D. 638, when it resumed its ancient name, Chaleb or Chalybon (Schultens, Index Geogr. s.v. Haleb). It afterward became the capital of the sultans of the race of Hamadan, but in the latter part of the tenth century was united to the Greek empire by the conquests of Zimisces, emperor of Constantinople, with which city it at length fell into the hands of the Saracens. It is now called by Europeans Aleppo (Hardouin, ad Pliny 2, 267), but by the natives still Halab, a famous city of the modern Orient (Mannert, VI, 1, 514 sq.; Busching, Erdbeschr. V, 1, 285). The excavations a little way eastward of the town are the only vestiges of ancient remains in the neighborhood; they are very extensive, and consist of suites of large apartments, which are separated by portions of solid rock, with massive pilasters left at intervals to support the mass above (Chesney, Euphrat. Exped. 1, 435). Its present population is somewhat more than 100,000 souls (see Penny Cyclopaedia, s.v. 
Haleb; M'Culloch, Geogr. Dict. s.v. Aleppo; Russel's Nat. Hist. of Aleppo, passim). SEE HELBON.
2. A city of Macedonia, to which the apostle Paul retired with Silas and Timotheus, in the course of his first visit to Europe, on being persecuted in Thessalonica (Ac 17:10), and from which, on being again persecuted by emissaries from Thessalonica, he withdrew to the sea for the purpose of proceeding to Athens (ib. 14, 15). The community of Jews must have been considerable in Beroea, and their character is described in very favorable terms (ib. 11; see Conybeare and Howson, St. Paul, 1, 339). Sopater, one of Paul's missionary companions, was from this place (Βεροιαῖος, Ac 20:4; comp. Beroeus, Liv. 23, 39). Beroea was situated in the northern part of the province of Macedon (Pliny 4, 10), in the district called Emathia (Ptolem. 3, 13, 39), on a river which flows into the Haliacmon, and upon one of the lower ridges of Mount Bermius (Strabo, vii, p. 390). It lay 30 Roman miles from Pella (Peut. Tab.), and 51 from Thessalonica (Itin. Antonin.), and is mentioned as one of the cities of the thema of Macedonia, (Constant. De Them. 2, 2). Coins of it are rare (Rasche, 1, 1492; Eckhel, 2, 69). Beroea was attacked, but unsuccessfully, by the Athenian forces under Callias, B C. 432 (Thucyd. 1, 61). It surrendered to the Roman consul after the battle of Pydna (Liv. 44, 45), and was assigned, with its territory, to the third region of Macedonia (Liv. 45, 29). B.C. 168. It was a large and populous town (Lucian, Asinus, 34), being afterward called Irenopolis (Cellarii Notit. 1, 1038), and is now known as Verria or Kara-Verria, which has been fully described by Leake (Northern Greece, 3, 290 sq.) and by Cousinery (Voyage dans la Macedoine, 1, 69 sq.). Situated on the eastern slope of the Olympian mountain range, with an abundant. supply of water, and commanding an extensive view of the plain of the Axius and Haliacmon, it is regarded as one of the most agreeable towns in Rumili, and has now 15,000 or 20,000 inhabitants. 
A few ancient remains, Greek, Roman, and Byzantine, still exist here. Two roads are laid down in the itineraries between Thessalonica and Beroea, one passing by Pella. Paul and his companions may have traveled by either of them. Two roads also connect Beroea with Dium, one passing by Pydna. It was probably from Dium that Paul sailed to Athens, leaving Silas and Timotheus behind; and possibly 1Th 3:2 refers to a journey of Timotheus from Beroea, not from Athens. SEE TIMOTHY. | <urn:uuid:50a7aaeb-af15-4dd3-8c63-485e4e7c8ee1> | CC-MAIN-2018-51 | https://www.biblicalcyclopedia.com/B/beroea.html | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825495.60/warc/CC-MAIN-20181214070839-20181214092339-00456.warc.gz | en | 0.94411 | 1,331 | 3.09375 | 3 |
The Great Game of India, for Grand Glorious India, has been called by many names over its long 2000-year history. Starting as an epithet for the unsuccessful campaign of Alexander, who only touched the borders of the then India, the name gained its popularity with the British occupation of 51% of India. From the beginning, whether waged under the name Clash of Civilizations and portrayed as a Greco-Roman civilizational existential fight with the Egyptians and Persians, or under the various names of the Crusades (the Christian-Islamic fight, or the Orange Wars, the inter-Christian sectarian wars between Protestants and Catholics), the purpose of all these wars was to control the fathomless depths of knowledge and resources of India. Under the French King Louis XIV it was given the glorified name 'Geo-politics' and defined thus: "trade is the nerve center of the economy and war is the only way to enforce safe trade". Initially these were Greco-Roman religious wars with the Egyptians and Persians for control of the natural and human resources of those countries, so as to have direct access to India. When these countries changed their religion to Christianity and Islam, the same wars became Crusades. When countries fought wars within their own borders it was for Democracy, and when they fought with each other it was the Wars for World Peace. Here the word "World" means those countries that want to control the resources of third-world countries, and more specifically India. This religious character of war continued till the beginning of the 19th century, when it took the name of World Wars for supposed world peace.
Before the Greeks and Romans set out on the path of resource control, the majority of the world was following Indian cultural practices, from social life to sun and nature worship (sometimes also called Pagan). These socio-cultural practices were held up as the justification for "civilizing" these resource-rich countries (read: occupying them) and using their resources for the conquerors' benefit. Every war needs massive human and financial resources, which were conspicuously absent in the Europe of that time. So, to sustain the inter-European wars, money and men were needed from the neighboring countries or continents. This need became a curse to multifarious civilizations and their progress, as most of them were decimated to feed this greed of the war machine.
There are two aspects to the control of Indian cultural resources worldwide:
- The resources themselves: natural, physical and human resources.
- Most importantly, the "knowledge sources" that went into utilizing these resources, which formed the bedrock of Indian culture.
A model for exploiting resources through the takeover of knowledge sources was set by the Greeks. The method was to carry all sources of knowledge (manuscripts, books, etc.) back to the conquering country, translate them into its own language, and then burn the original sources in the countries they were taken from, thus reversing that civilization's entire progress. Once translated, the texts were declared the original works of the translators, and the same material was reintroduced into the conquered countries as a massive propaganda tool of mind control, signaling to the losers that they had nothing of their own and that whatever science or technology existed belonged to the conquerors. From the burning of the great library of Alexandria to the recent plunder of manuscripts from India, the same pattern has been followed with great rigor, stealth and cunning.
When the Greeks, Romans, Egyptians and Persians were a bygone past, their role was taken up by the Catholic Church, the Protestant Church, the Arabs and the Iranians, who had by then adopted Christianity and Islam. When the Arabs sacked Constantinople, blocked the land access to India and moved to control the trade in resources coming from India, centuries of wars were waged across Europe and Eurasia in the name of the Crusades to retake control of the trade routes to India. When that effort failed, the Catholic Church divided the world with scale and pencil under the Treaty of Tordesillas into two "democratically" equal halves, giving the eastern side of the then-known world map, with all its countries, to the Portuguese and the western side to the Spanish. Later, the newly created nations of Western Europe, including England, Portugal, Spain and France, became interested in finding alternative routes for trade with the East. Portugal began exploring sea trade routes and in 1487 charted an ocean route around Africa to India. The real reason for this agreement, and for all these efforts at exploration, was the necessity of finding resources to win the Crusades, resources that only India and the other Eastern nations could provide.
There was also a scheming commission attached to this agreement: of whatever the Spanish or Portuguese found in these new lands, from resources to men, women and children, a small minimal commission had to be given to the Catholic Church, or the Pope (the "Holy See", as traders then called him), in return for providing logistics and support.
The search for a secure Indian sea route was prompted by two concerns.
- To secure a resource base for fighting their traditional Crusade rivals.
- To hunt down the Jewish population who by that time were "supposedly" controlling the business interests of the known world.
When the Portuguese found the sea route to India, their first order of business was to find gold and spices; the second was to find and kill the Jews residing in India, in what is now known as the Goan Inquisition. The killing was pursued with such ferocity that many Jews of those days had taken to Hinduism, or Sanatana Dharma, adopting the practices of various communities for their survival after Arab Muslims drove them out of Europe and other areas of Central Asia. They predominantly took up the practices of the Teli community or the Brahmin community. Several Portuguese governors therefore ordered the killing of all Brahmin and Teli community members, whether they were Jews or not, which led to the exodus of these communities from the Portuguese-occupied lands in India. After dealing with the Jews, the Portuguese turned toward their traditional rivals, the Muslims.
The British finally subdued the Spanish armadas using the pirate fleet of Francis Drake, who plundered Spanish gold and enriched the British coffers. As a result, Britain became a naval power, and the British figured out two things...
Read this exclusive research on the 2000-year suppressed secret history of India, never taught in India nor even discussed in the mainstream media, only in the Jul-Sept 2015 Inaugural Issue of GreatGameIndia, India's only quarterly journal on Geopolitics and International Affairs.
India in Cognitive Dissonance is a hard-hitting myth-buster from the Editors of GreatGameIndia. A timely reminder for the decadent Indian society, and a masterpiece on Geopolitics and International Relations from an Indian perspective, it lays bare the hypocrisy that has taken root in the Indian psyche because of the falsehoods that Indian society has come to accept as eternal truth.
Kirby Ferguson has written a summary of the book A Technique for Producing Ideas. Generating good ideas is a fine art; if you master it, you will be successful in many fields. The author of the book, James Young, describes a five-step technique for combining old elements together:
- Gather new material, both specific and general.
- The Mental Digestive Process
- Drop it
- Poof, the idea appears
- Work it
Kirby also adds a thought of his own: write down every idea you have in mind. Your mind is not always as reliable as paper, and ideas sometimes stay there only for a short time. Once you have captured your ideas in a notepad, you also gain extra chances to link and modify them.
Book summary: A Technique for Producing Ideas – [Goodie Bag]
Primary Computing. Made Easy.
1. Scratch Chat
What will you learn?
The basics of Scratch, including:
- Add backgrounds (stages) and characters (sprites)
- Write a simple program to have two sprites talk to each other.
- Write a program for a sprite (e.g. a ball) to move a certain distance
What will you need?
The Scratch website
2. Backgrounds (stages)
Watch the video to learn how to add backgrounds, which are called Stages in Scratch.
Once you have watched the video, try it yourself.
3. Characters (sprites)
Now add some characters (sprites) into your stage and move/resize them. For this program we are going to have two animal sprites and a ball sprite.
Move forward
Glide to position

5. Movements
Write a program for the ball sprite to move from one character sprite to another. You will need to experiment with the number of steps.
Remember you will need to leave a wait code block at the start to let the characters finish talking.

Can you continue the conversation? How would the character send the ball back?
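Sketched as Scratch-style pseudocode, the chat-and-ball program described in these steps might look roughly like this (block wording is approximate, and the wait times and step count are placeholders you will need to tune for your own stage):

```
-- Sprite 1 (first animal) --
when green flag clicked
say "Hi! Want to play ball?" for 2 seconds

-- Sprite 2 (second animal) --
when green flag clicked
wait 2 seconds              -- let Sprite 1 finish talking
say "Yes, throw it to me!" for 2 seconds

-- Ball sprite --
when green flag clicked
wait 4 seconds              -- wait for the conversation to end
move 150 steps              -- experiment until the ball reaches Sprite 2
```

To send the ball back, a second `move` with a negative number of steps (or a `glide` back to the first sprite's position) after another short wait would work.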
Karen Norberg #1
If fabric art can create elegant models of the shape of the universe, it should come as no surprise that it can do the same for the brain. This is one of the stunning creations at the online Museum of Scientifically Accurate Fabric Brain Art (h/t "TNH's Particles" at Making Light).
It's fascinating how the mostly analog works of the fabric arts intertwine with the more digital representations of modern science and technology, going back at least as far as Ada, Countess of Lovelace, Byron's daughter, who eloquently referred to weaving in her "Notes" on Charles Babbage's Analytical Engine:
Who can foresee the consequences of such an invention? The Analytical Engine weaves algebraic patterns just as the Jacquard loom weaves flowers and leaves. The engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.

She's quoted in Women Weavers and Web Weavers, which draws additional parallels.
In the comments on my post about toxic food packaging labels, the subject of fruit and vegetable labels came up, those little plastic stickers affixed to almost all grocery store produce these days so cashiers don’t have to memorize the codes.
Back in the day when I was a kid, produce didn’t come with stickers. There were codes ink-stamped on some of the citrus, as I recall, but nothing like the plastic stickers we have today that are especially annoying when attached to soft-skinned fruit like ripe pears and peaches. Don’t you hate when the skin rips off with the sticker?
But what about the adhesive and the tiny bit of plastic the sticker represents? Is it something that should keep us awake at night? My feeling is that no, it should not, and before you crucify me, please let me explain why.
Stickers Differentiate Organic from Non-organic
Devised by the International Federation for Produce Standards, the PLU (Price Look-Up) codes on produce labels help us and grocery store employees differentiate between organic food and non. Four number codes indicate non-organic. Five number codes beginning with 9 indicate organic food. Codes beginning with 8 are supposed to indicate that produce is genetically-modified, but according to Charles Margulis from the Center for Environmental Health, apparently those GMO produce sticker codes are unreliable. Some companies use them and some don’t. So just because a piece of fruit is not labeled with an 8 doesn’t mean it’s not GMO.
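The labeling rules above are simple enough to express in a few lines of code. Here is a minimal sketch (my own hypothetical helper, not part of any official tool, and one that reports the unreliable leading-8 GMO convention only as a label):

```python
def classify_plu(code: str) -> str:
    """Rough PLU interpretation: 4 digits = conventional;
    5 digits starting with 9 = organic; 5 digits starting with 8
    = GMO by convention (a convention not reliably used)."""
    if not code.isdigit():
        return "not a PLU code"
    if len(code) == 4:
        return "conventionally grown"
    if len(code) == 5 and code[0] == "9":
        return "organic"
    if len(code) == 5 and code[0] == "8":
        return "labeled GMO (unreliable in practice)"
    return "unknown"

print(classify_plu("4011"))   # conventionally grown
print(classify_plu("94011"))  # organic
```

So a sticker reading 4011 is a conventional banana, while 94011 is the organic version of the same fruit.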
Now think about all the chemicals used to grow non-organic foods. And think about the possibility of organic and non-organic fruits and vegetables getting confused by employees at the grocery store. I’d rather have a plastic sticker letting me, and them, know the difference than subject myself to residue from all the petrochemical pesticides and fertilizers used to grow conventional produce. Wouldn’t you?
Stickers Differentiate Local vs. Non
Those stickers also indicate the country and often the state of origin. If you’re concerned about reducing your food miles, it’s nice to know where your produce came from. Do you think a Safeway employee would be able to give you that information without a sticker on the fruit?
What to Do With Produce Stickers
So, you’ve eaten the fruit and still have the plastic sticker hanging around. What do you do with it? Well, certainly don’t put it in your compost bin with the peels, pits, and cores. A Fake Plastic Fish reader commented a while back about tossing all her fruit peels onto the compost pile, stickers and all, and ending up at the end of a season with a big pile of stickers. Throw them in the trash. Or get creative.
Collect them: Apparently, there is a whole community of people dedicated to collecting fruit labels the way people collect stamps. World of Fruit Labels bills itself as “The web’s first and oldest fruit label site. Founded as long ago as May 1999.” The site contains images of and information about over 1,000 different labels.
Make art: Barry Snyder of Stickerman Produce Art collects and makes amazing collage art from produce stickers. From a fruit-sticker version of Andy Warhol's Campbell's Soup Can to a beautiful pair of cowboy boots, Barry's collages depict pretty much anything he thinks of. This one is probably my favorite, but that's because I'm weird.
You can send your used fruit stickers to Barry at:
Barry “Wildman” Snyder
Erie, CO 80516
Avoiding Plastic Produce Stickers
All that said, I avoid plastic produce stickers. Why? For one, because the goal of my project on Fake Plastic Fish is to see how little plastic waste I can generate, and fruit stickers count as plastic waste. But the biggest reason is because I can!
Farmers Markets: I live in an area with year-round farmers markets, so I never have to buy produce from the grocery store. And none of the fruits and vegetables at the farmers market come with stickers. The vendors know their own produce. They don’t have to memorize hundreds of codes.
CSA’s: another source of stickerless produce is your local CSA, if you have one. Generally that food will be sticker-free as well.
Grow Your Own: If you need to put a sticker on produce you grow in your backyard, I would like to meet you because you are weirder than I am.
Picking Our Battles
We can pick our fruits and noses and battles. Compared to all the other sources of plastic pollution, fruit stickers are the least of our worries. Plus, they are actually useful. On a personal level, look at the areas of your life that still need de-plasticking work. Do you still end up with plastic grocery bags sometimes? Are you menaced by take-out food containers? Do your kids bring home cheap plastic Happy Meal crap? Yes? Then those are the areas I would focus on: remembering our travel mugs and water bottles and reusable bags; refusing plastic packaging as much as possible.
And if fruit stickers are the one last hold out in your quest to get disposable plastic out of your life, maybe it’s time for bigger action. How about getting involved in a campaign to ban or tax disposable bags in your city or state? What about writing to stores that you frequent and asking for more plastic-free options? How about organizing community swap meets so you can reduce the need to buy new durable plastic products? The sky is the limit.
I’m not saying that I think it’s just fine and dandy to have plastic and adhesive stuck to our fruits. I’m just saying I think we have bigger issues to worry about. | <urn:uuid:13a64b8c-2038-4058-9fb9-0af5cf0d27a5> | CC-MAIN-2017-13 | https://myplasticfreelife.com/2010/06/should-we-worry-about-little-plastic-produce-stickers/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188824.36/warc/CC-MAIN-20170322212948-00028-ip-10-233-31-227.ec2.internal.warc.gz | en | 0.944051 | 1,234 | 2.515625 | 3 |
Before you do anything with a bike, make sure it fits the rider, especially if you're checking bikes for growing kids. If it doesn't fit well, it's a safety risk. The best way to measure this is by standing over the bike. With your feet flat on the ground, there should be 1" to 2" between you and the top of the bike. Next, adjust the seat so when you sit, your toes should just barely touch the ground. When you pedal, your legs should extend until they're almost, but not quite, straight. Most bicycle seats can be raised and lowered using an Allen wrench.
Make sure the wheels are rolling smoothly. Turn the bike upside down and spin them fast a few times. They should spin without hitting the brakes or any other components. If the wheels are wobbly, that means the spokes need to be adjusted or the wheels should be replaced. If the wheels are OK, make sure the tires are inflated. Like car tires, the PSI (pounds per square inch) is printed on the side of the tires. A pump with a gauge will tell you what PSI your tire is inflated to. You won't get very far on flat tires.
Look at your brake pads before you get on your bike. They should be thick and have grooves in them. If they're thin and smooth, they need to be replaced. If they're OK, take the bike on a short test ride. When you brake, your bike should come to a quick, complete stop. If it doesn't, you may have to adjust the pads. If they need to be adjusted, it could be a problem with the brake cables, which a bicycle mechanic can help you with.
The chain is a little bit more difficult to check on your own, but maintenance is important. Chains don't last forever. In fact, they don't last long at all. If you bike a lot, a chain lasts about a year. If you have had your chain for more than a year, definitely check for wear. Without getting too detailed, there is a chain wear tool that you can apply to your chain. It's a narrow piece of metal with knobs on each end. If both knobs fit between the links, you need a new chain. If they don't, you can keep it a little longer. This is also something a bicycle mechanic can help you with. Once you've determined whether your chain is OK or you need to get a new one, also make sure it's lubricated.
Once your bike is ready to ride, make sure you're ready to ride with these safety tips:
- Always wear a helmet
- If you're wearing pants, wear a trouser clip so your pants don't get tangled in your gears
- Make sure your bike has reflectors or you're wearing reflective clothing
- Use head and tail lights when riding at night
- Obey traffic laws that apply to cars
- Always use hand signals so cars know what you're doing
Now that you've passed inspection, load up your bike with drinkware, and roll on! | <urn:uuid:7a2c3b30-b772-4a04-9aec-7246dd7ac3ea> | CC-MAIN-2017-22 | https://www.lakeside.com/browse/Bike-Safety-Checklist/_/N-1z100psZ1z0xzct | s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608622.82/warc/CC-MAIN-20170526013116-20170526033116-00243.warc.gz | en | 0.962767 | 636 | 2.90625 | 3 |
(Post by Bruce Abbott)
As a musical performer I've always been intrigued by musical styles and the challenge of authentically playing in a given style. What makes classical music sound classical, rock sound rock, etc.?
Style implies uniformity, an overall set of rules or standards. Then, within those sets of rules, it's the subtle differences, the rearranging of order, the little twists and turns that a particular composer or performer takes that make each piece in that style unique and interesting. Repetition and variation.
A criticism sometimes levied at an unfamiliar or disliked style of music is that "It all sounds the same". Well, in fact, it does. That's what a style is supposed to do - repeat itself. But once we are familiar with the individual pieces within a style we can become immersed in the unique subtleties of each piece or performance and focus on the variations. The repetition provides form and familiarity while the variation provides interest.
For a non-musical example, when I toured Korea with the Rhode Island Saxophone Quartet it took me about three days before I stopped getting the names of our Korean hosts wrong. There was a uniformity to their appearance that at first struck me as "all the same". When I was finally able to perceive the uniqueness and subtle differences of each person in the group I no longer got their names mixed up.
For a musical example, cool jazz and hip hop are often perceived as very relaxed and laid back styles. They definitely have that vibe to them, but they are very precisely relaxed and laid back. There are very specific things that the musician must do to achieve that feeling. A style of rock music might be described as "driving"; something classical might be "expansive". How is all that achieved?
To perform as authentically as possible in a given style I need to discover the uniform characteristics of the style as well as the subtleties of the differences. Repetition and variation. | <urn:uuid:cec4ed1c-b74b-4d7d-b3cb-d685266dbc23> | CC-MAIN-2019-47 | https://www.northstarjazz.com/blog/musical-style-repetition-and-variation | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670535.9/warc/CC-MAIN-20191120083921-20191120111921-00114.warc.gz | en | 0.9729 | 399 | 2.640625 | 3 |
Definition: Accounting method that records revenues and expenses when they are incurred, regardless of when cash is exchanged. The term "accrual" refers to any individual entry recording revenue or expense in the absence of a cash transaction.
Most businesses typically use one of two basic accounting methods in their bookkeeping systems: cash basis or accrual basis. While most businesses use the accrual basis, the most appropriate method for your company depends on your sales volume, whether or not you sell on credit and your business structure.
The cash method is the most simple in that the books are kept based on the actual flow of cash in and out of the business. Income is recorded when it's received, and expenses are reported when they're actually paid. The cash method is used by many sole proprietors and businesses with no inventory. From a tax standpoint, it is sometimes advantageous for a new business to use the cash method of accounting. That way, recording income can be put off until the next tax year, while expenses are counted right away.
With the accrual method, income and expenses are recorded as they occur, regardless of whether or not cash has actually changed hands. An excellent example is a sale on credit. The sale is entered into the books when the invoice is generated rather than when the cash is collected. Likewise, an expense occurs when materials are ordered or when a workday has been logged in by an employee, not when the check is actually written. The downside of this method is that you pay income taxes on revenue before you've actually received it.
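The timing difference between the two methods can be sketched in a few lines of code. This is a toy illustration with made-up numbers and a hypothetical `recognized_income` helper, not an accounting tool:

```python
# Toy ledger comparing when a $500 credit sale shows up as income.
# Under accrual accounting, income is recorded when the invoice is
# issued; under cash accounting, only when the payment arrives.

def recognized_income(events, method):
    trigger = "invoice" if method == "accrual" else "payment"
    return sum(amount for kind, amount in events if kind == trigger)

events = [
    ("invoice", 500),   # March: sale on credit, invoice sent
    ("payment", 500),   # May: customer actually pays
]

# Looking at the books at the end of March (only the invoice exists so far):
march = events[:1]
print(recognized_income(march, "accrual"))  # 500 -> income already on the books
print(recognized_income(march, "cash"))     # 0   -> nothing until cash arrives
```

By May, once the payment event is recorded, both methods show the same $500; the difference is purely one of timing, which is exactly why accrual books can owe income tax on revenue not yet collected.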
Should you use the cash or accrual method in your business? The accrual method is required if your business's annual sales exceed $5 million and your venture is structured as a corporation. In addition, businesses with inventory must also use the accrual method. It's also highly recommended for any business that sells on credit, as it more accurately matches income and expenses during a given time period.
The cash method may be appropriate for a small, cash-based business or a small service company. You should consult your accountant when deciding on an accounting method. | <urn:uuid:6cd08c7a-b239-420d-a99e-c8e34b1ab7c7> | CC-MAIN-2017-39 | https://www.entrepreneur.com/encyclopedia/accrual-accounting | s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685129.23/warc/CC-MAIN-20170919112242-20170919132242-00534.warc.gz | en | 0.969093 | 433 | 3.46875 | 3 |
Focus: Stopping Sound with Foam
While investigating the use of ultrasound to probe the structure of a liquid foam, researchers in France came across an unexpected result. In Physical Review Letters, they report that the foam completely blocked the transmission of ultrasound waves in a certain range of frequencies. Their experiments are the first demonstration that a foam can act as a metamaterial—a material whose complex internal structure endows it with unusual physical properties. The researchers suggest that such foams may have practical value as acoustic insulators.
Liquid foams are used in industry, including oil spill recovery, and as consumer products, such as shaving foam and whipped cream. The size range and distribution of bubbles in foams influence their properties and behavior, but the fragility and opacity of such foams make them hard to study. Valentin Leroy of the University of Paris-Diderot and his colleagues reasoned that measuring the transmission of sound waves through foams would be a good way to probe their structure.
The team made foams using a two-syringe method that injected air saturated with the insoluble gas perfluorohexane (C6F14) into water mixed with sodium dodecyl sulfate, a surfactant that stabilizes bubbles. Once a liquid foam had formed, its median bubble size grew steadily from to micrometers over a period of about minutes. Leroy and his colleagues measured the attenuation of ultrasound pulses that passed through a layer of foam sandwiched between two polymer films spaced millimeters apart. They plotted the attenuation versus bubble size and ultrasound frequency, which ranged from to kilohertz (kHz). The discovery that foam could totally block transmission came as a surprise. The range of suppressed transmission was centered at about for a median bubble size of and at about when the bubble size increased to .
An ability to block transmission in a certain frequency range was first demonstrated in 1991 for microwave radiation passing through a structure with a periodic pattern of nanosized holes (see 23 August 2013 Focus Landmark). In this periodic structure, known as a photonic crystal, interference among waves traveling different routes prevents transmission at certain frequencies. Researchers have also created similarly structured metamaterials for sound waves, but they hadn’t previously observed an acoustic metamaterial with an irregular structure, such as a foam.
Another research team recently showed that a series of thin films placed crosswise inside a tube acted as a metamaterial, strongly attenuating sound waves within a restricted frequency range. To explain their own results, Leroy and his colleagues devised a model inspired by this previous work.
The team analyzed the passage of sound waves through this model. At the lowest frequencies, the rings and the films all move in synch with the sound waves. As the frequency rises, the heavier rings begin to lag, while the films, whose behavior is driven by their elasticity rather than their inertia, are able to keep pace. Eventually, the rings and films become so out of synch that the bulk of the mass in the model is moving with opposite phase to the sound waves, so no sound is transmitted. At higher frequencies still, the films are trying to move so fast that their inertia, not their elasticity, determines their motion. The rings and films then behave in similar ways, and sound passes again.
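The out-of-phase regime described above is the hallmark of a negative effective density. As a minimal sketch, not the authors' exact calculation, one can borrow the generic mass-in-mass effective-medium picture (the symbols here are my own assumptions: a heavy ring of mass m_r coupled to a light elastic film of mass m_f with resonance frequency omega_0):

```latex
% Generic mass-in-mass sketch (not the authors' exact model):
% heavy ring m_r coupled to a light elastic film m_f that
% resonates at \omega_0.
m_{\mathrm{eff}}(\omega)
  \;=\; m_r \;+\; m_f\,\frac{\omega_0^{2}}{\omega_0^{2}-\omega^{2}}
```

Just above the film resonance the second term is large and negative, so the effective mass goes negative: the wavenumber of a plane wave becomes imaginary, and sound decays instead of propagating. Well above resonance the term vanishes, the effective mass turns positive again, and transmission resumes, consistent with the bounded band of blocked frequencies seen in the experiment.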
John Pendry of Imperial College London, who pioneered the physics of metamaterials, says that foam definitely qualifies: “A metamaterial is a material whose properties are determined through the structure rather than the composition, and a foam fulfills that very nicely.” Leroy and his colleagues plan to investigate whether more durable foams made of elastic materials will show the same behavior and will be useful for sound barriers.
Alexander Hellemans is a freelance science writer in Naples, Italy, and a contributor to the upcoming book 30-Second Quantum Theory (Icon Books, June 2014).
- S. H. Lee, C. M. Park, Y. M. Seo, Z. G. Wang, and C. K. Kim, “Acoustic Metamaterial with Negative Density,” Phys. Lett. A 373, 4464 (2009) | <urn:uuid:353de539-b1ea-4b1b-82ad-cd70a7145b6a> | CC-MAIN-2017-17 | https://physics.aps.org/articles/v7/37 | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121153.91/warc/CC-MAIN-20170423031201-00457-ip-10-145-167-34.ec2.internal.warc.gz | en | 0.945093 | 907 | 3.65625 | 4 |
Scientists have spotted water in the atmosphere of 51 Pegasi b, one of the first exoplanets ever discovered. It is around 50 light years away, so we can call it a "nearby" exoplanet, and it lies in the constellation of Pegasus.
Proxima Centauri b, the Earth-like planet orbiting the red dwarf Proxima Centauri, may have oceans, scientists say. The planet was discovered in August 2016 and caused excitement because it's in the habitable zone of its star, and it's rocky. And it's in Alpha Centauri, the closest star system to us!
On August 24, 2016, a group of scientists led by Dr. Guillem Anglada-Escude at Queen Mary University of London announced the discovery of a terrestrial exoplanet orbiting the red dwarf Proxima Centauri, the nearest known star to the Sun. Proxima Centauri is Latin for "nearest (star) of Centaurus". The new planet is named Proxima Centauri b, and it is predicted to orbit within the habitable zone!
This new strategy brief from the Finance Project and the Council of Chief State School Officers describes how six major funding streams included in the No Child Left Behind Act (NCLB) can support extended learning opportunities.
'Using NCLB Funds to Support Extended Learning Time: Opportunities for Afterschool Programs,' provides important context for those seeking to access these funding streams, and includes a discussion of strategies, considerations and tips for accessing each source.
Laptops, Internet access, scanners, and video cameras can help teachers and students access information and resources quickly and easily. Digital imagery, PowerPoint presentations, and microphones create fun, interactive classrooms.
But access and ease do not equal knowledge and comprehension, according to Peter N. Berger in his Oct. 26 EDUCATION WEEK Commentary. In the midst of all the educational technology hype, Berger writes that we have lost sight of the basics of learning and teaching.
How effective is cutting-edge equipment in improving actual achievement?
The ITEST (IT Experiences for Students and Teachers) Learning Resource Center, in which YouthLearn is a partner, recently held a public event on Engaging Girls in Science and IT. This eSchool News feature story provides a good summary, in case you missed the webcast:
"To engage girls in the study of science and technology, educators need to convey the right message about the roles these fields play in society and the skills they require--and they also need to provide more hands-on activities that have some social value.
These were the main lessons imparted during a Sept.
"The Japan Foundation Center for Global Partnership (http://www.cgp.org/) is a grantmaking organization that works to promote mutual understanding between the United States and Japan on contemporary social issues.
CGP's Grassroots Exchange Program provides support for exchange projects that involve issue-oriented cooperation and learning among youth, nonprofit organizations, and members of the general public in both the U.S.
"ITEST is designed to increase the opportunities for students and teachers to learn about, experience, and use information technologies within the context of science, technology, engineering, and mathematics (STEM), including Information Technology (IT) courses.
This article captures "recent conferences, collaborations, and venues for professional development" opportunities aimed for nurturing the field of youth media. YouthLearn's current projects related to youth media are identified in this article.
"Youth media has become a bona fide field with its own practices, philosophies, and goals."
"Not long ago, many of us working in youth media did not consider ourselves part of a field. And, really, why would we? Opportunities to share practices and collaborate with others working on teen-produced media were few and far between.
The Action Coalition for Media Education (ACME at http://www.acmecoalition.org) continues its 2005-2006 "Monthly Media Education" resource offerings with their "ACME Food For Thought Tool Kit", a FREE downloadable resource detailing a wide variety of ways health, language arts, history, civics, social studies and journalism teachers can use media education to focus on a host of food-related matters: obesity, nutrition, health, corporate power, agriculture - of vital importance to all of the globe's citizens today." | <urn:uuid:3506c4dc-f047-4828-b5f5-f1b34e2c061a> | CC-MAIN-2013-20 | http://www.youthlearn.org/newsarchive/2005?page=3 | s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707188217/warc/CC-MAIN-20130516122628-00025-ip-10-60-113-184.ec2.internal.warc.gz | en | 0.933045 | 667 | 2.953125 | 3 |
What is Bias
Bias is the human tendency to make systematic errors
in judgement or when making decision
based upon certain thinking, thoughts or preconceived notions
Creating and managing an investment portfolio requires decisions to be made on – how to invest, in which asset classes, timing of entry/exit and reviewing/rebalancing the portfolio. Decisions ought to be based on the analysis of available information so as to optimise expected performance and risks associated with such investment. Very often the decisions are influenced by behavioural biases in the decision maker. Sometime even experienced Fund Manager may also fall prey to it, which leads to less than optimal choices being made.
An investor faces several hurdles, minor and major both. These include personal ones like lack of knowledge and ability to invest at the optimum levels. Some of the well documented biases that are observed in investment decision making are:
Disposition Bias: An important hurdle that often comes in the way of realizing an investor’s financial dream is the emotions of the person that mislead him to divert from what he should ideally do. It’s always better to be informed about such emotional hurdles in investing before it is too late. For example, people often keep on holding stocks bought years ago and are still in the red, but prefers to sell off those stocks in which they have made profit and has further potential to make more profit. In this case termed as disposition bias in financial economics literature, investors tend to hold on to their losing bets in the hope of recouping their losses sometime in the future but feel good to make some small profit by disposing off their winners.
Optimism or Confidence Bias: Investors cultivate a belief that they have the ability to
Out-perform the market based on some investing successes. Such winners are more often than not short-term in nature and may be the outcome of chance rather than skill. If investors do not recognize the bias, they will continue to make their decisions based on what they feel is right than on objective information.
Familiarity Bias: This bias leads investors to choose what they are comfortable with. This may be asset class they are familiar with or stocks/sectors about which they have greater information and so on. Investors holding an only real estate portfolio or a stock portfolio concentrated in shares of a particular company or sector are demonstrating this bias. It leads to concentrated portfolios that may be unsuitable for the investor’s requirements and feature higher risk of exposure to the preferred investment. Since other opportunities are avoided, the portfolio is likely to be underperforming.
Anchoring: Investors hold on to some information that may no longer be relevant, and make their decisions based on that. New information is labelled as incorrect or irrelevant and ignored in the decision making process. Investors who wait for the ‘right price’ to sell even when new information indicate that the expected price is no longer appropriate, exhibit this bias. For example, they may be holding on to losing stocks in expectation of the price regaining levels that are no longer viable given current information, and this impacts the overall portfolio returns.
Loss Aversion: The fear of losses leads to inaction. Studies show that the pain of loss is twice as strong as the pleasure they felt at a gain of a similar magnitude. Investors prefer to do nothing despite information and analysis favouring a particular action that in the mind of the investor may lead to a loss. Holding on to losing stocks, avoiding riskier asset classes like
Equity, when there is a lot of information available on market volatility are manifestations of this bias. In such situations investors tend to frequently evaluate their portfolio’s performance, and any short-term loss seen in the portfolio makes inaction the preferred strategy.
Herd Mentality: This bias is an outcome of uncertainty and belief that others may have
better information, which leads investors to follow the choices that others make. Such choices may seem right and even be justified by short-term performance, but often lead to bubbles and crashes. Small investors keep watching other participants for confirmation and then end up entering when the markets are over heated and poised for correction.
Demonstrative Effect: Investors, especially new to investing, often get carried away by what their friends or relatives say. There are people who boast how they have made multiple times in some stock, which has a demonstrative effect on the new comer, who without understanding even the basics of investing, just dive into putting his money into something which could be an extremely risky bet or may not even be suitable for him. In such situations, it is also seen that the person claiming his multiple winning stock to people known to him, may also have hidden stocks and investments in which he has lost money. So it’s very important for investors to keep away from such demonstrative effects which could, in the long run, prove to be a loss making proposition.
Recency Bias: One of very strong emotional bias is “recency bias“, the phenomenon of a person most easily remembering something that has happened recently, compared to something that may have occurred a while back. The impact of recent events on decision making can be very strong. This applies equally to positive and negative experiences. Investors tend to extrapolate the event into the future and a repeat. A bear market or financial crisis lead people to prefer safe assets. Similarly a bull market make people allocate more than what is advised to risky assets. The recent experience overrides analysis in decision making. So everybody expect the recent performance to continue over a future period which is not true.
Choice Paralysis: The availability of too many options for investment as also too much of information can lead to a situation of not wanting to evaluate and make the decision.
Greed: At times there are people who claim that they could give very high returns, compared to the accepted market linked returns of financial products, and that too within a short period of time. Investors should be careful about such tall claims before investing
Individual investors can also reduce the effect of such biases by adopting few techniques. As far as possible the focus should be on data and interpreting & understanding it. Setting in place automated and process-oriented investing and reviewing methods can help biases such as inertia and inaction. Facility such as systematic investing helps here. Over evaluation can be avoided by doing periodic reviews. A rational investor would probably look at the portfolio of all the stock as a whole and decide which ones to sell off and in which ones they should remain invested, irrespective of the losers and gainers status.
It is always good to have a financial adviser the investor can trust who will take a more objective view of the investor’s finances in making decisions and will also help prevent biases from creeping in.
Best way to invest, for new as well as experienced investors, is to have a disciplined approach, patience and diversify.
For more, please visit our website | <urn:uuid:821f0eaa-4c3a-4530-a8fc-396dfb95e5ea> | CC-MAIN-2021-31 | https://m4moneyblog.wordpress.com/2016/01/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154126.73/warc/CC-MAIN-20210731203400-20210731233400-00456.warc.gz | en | 0.96334 | 1,406 | 3.6875 | 4 |
Ant Nests or Homes for your ants.
An ant nest in the wild is called a Formicary. This is taken from the old language of Latin, which is often used along with ancient Greek to classify and identify practically all species of plants, animals, insects etc. Any type of ant set-up that you may choose to use is referred to as a "Formicarium", whether it is a ready made one, or something you have made yourself, such as the ones sold in the UK called "ANT WORLD", or a plaster type nest. Many kinds of containers can be used to house your ants in. The most important thing is, it must be escape proof and also transparent, so that you can observe the ants without disturbing them too much! My own personal preference is for the kind of upright double glazed window, with 2 clear sides made from glass or plastic. My wife does not object to my keeping ants; but she might do if she found them wandering freely around her kitchen eating our food or that of our cats. So for this reason alone, I tend to keep my ants in close confinement.
Try when and wherever possible to keep your ants away from direct sunlight, and please remember to cover your Formicarium with something to exclude the light when you are not observing their behaviour, as although ants are fine with light, in the wild the nest is alway in darkness underground or covered by stones or vegetation. It is fascinating to watch ants going about their daily activities, digging tunnels and foraging for food. If you have any brood (eggs, lavae or pupae [i.e. "baby ants"] ) present in your ant colony, then you will see nurse ants caring for them. If you only decide to keep a few worker ants to study, and not have any functional queen present, then keep around 20-30 as an ideal number. Ants are social insects, and while a single ant can function alone, she does better with some of her sisters around to help out.
Think of an army with only 1 soldier, it wouldn't win any wars; and "Many hands make light work" the saying goes. So it is with ants, the more workers there are, the more easily and faster the work gets done; and although you may think a worker ant is always busy working, they do like to rest just the same as you or I do.
What medium is best to use?
Ant keepers often talk about what is the right kind of medium (soil type) or set up to keep ants in?
Some ant keepers prefer to use a Ytong (aerated concrete blocks) nest, while some like plaster nests or test tubes. Others like to keep their ants in sand, or a fancy mix that may come supplied with a set up that they buy.
Ask yourself what do ants nest in under natural conditions in the wild?
Around 95% of all ant species nest in good old fashioned soil, which let's face it, Mother Earth has supplied in abundance in whatever country or continent you happen to live in?
Okay, so some ants nest in wood or make nests from leaves or even their own living bodies; but the vast majority live as nature has provided, in nothing more than excavated networks of soil based cities.
Ants are not normally born into a glass test tube, a plaster nest or a nest made from concrete. Of course it isn't always easy to provide the exact same soil as the ants live in naturally, as ants which come from other countries may nest in a soil type which is completely different from that which is found in your garden; but if you add a little extra humus (leaf mould) or peat, or sand in the case of desert dwelling ant species, then many ants will adapt perfectly to ordinary dirt as long as it is kept a bit damp.
Just remember that while it might be fine to raise a colony in a glass test tube, this is not the ideal place for keeping them in long term; and while most ants do very well in an artificial nest, it does make them a lot happier to do what they would do in the wild; and that is to dig out their own nest to suit their own requirements for that species/genera, so please do consider this fact when keeping your ants in a long term set up.
Formicariums I use for my ants.
Most people who visit this site will probably wonder, and ask themselves this question. "What does he keep his own ants in?"
Well I must be honest and say, whatever I find the most suitable and convenient set up for the particular species I am wanting to keep in it. However, to avoid confusing people out there who may think some of the set ups shown elsewhere on this site, which are used by ant collectors other than myself; I thought it would be a good idea to place some pictures of the kind of set ups I am currently using, as well as ones I am also considering using in the near future!
You can use plastic or glass fish tanks, or the kind of plastic tanks sold for keeping small spiders or frogs. Never go too large for the species you want to keep, but on the other hand don't go too small either, as ants do like to have a reasonable space to live and forage in.
Again keep small ants in a smaller set up, and larger ants in a bigger one which will provide more space. Beware that ants can climb up glass or plastic walls, and so may escape unless you can find ways of preventing this. I have found that Messor ants are very good in tanks, as they tend to not climb the walls. Whereas Formica run up and out as fast as you put them in, even across any barrier that may be in their way. Though Camponotus seem less inclined to do this, even though they can!
I recently bought these 2 very sturdy plastic tanks, which are of a nice size for homing a medium sized ant colony, as they are 8 inches long and 4" wide with a depth of about 6" ( 1 inch is 2.5cm).
I have found these set ups very good for keeping smaller ants, such as Lasius, Myrmica or Tetramorium; but less so for larger ants like Camponotus or Formica. I would also suggest that they are not really suitable for ants such as Messor, Myrmecocystus or Pogonomyrmex species. The reason is, the walls are quite narrow and while a small species queen can move and turn easily between them, a larger ant can not, even if the species has small workers.
The largest queens I have tried out in an Ant World were Formica fusca, and they only just managed to move about. However for smaller queens and their workers, they are great. Although very tiny workers can sometimes squeeze their way out through the air holes in the top of the set up; but on the whole they are fairly escape proof and safe.
This type of ant farm can be purchased from here} http://www.interplaydirect.co.uk/ant-world-1-p.asp
Making a Plaster Nest.
This method for making a nice and really useful nest was the idea of Luke Goddard, a member of Ant Hill World forum, and comes with pictures to explain how to make it. Below is Luke's guide to making a plaster nest.
A plaster nest is a very good way to keep a colony, no matter how big or small. They are also quite easy and cheap to make, costing no more than £10 for the materials. Also they are good for studying the ants nests as you can see everything. There are quite a few things you must consider before making one though, such as:
Where - You need somewhere to have the thing!
What - You need to know what you are keeping in it to determine sizes.
Why - Will you design it to look stylish, to be good for the ants or easy to make?
How - How are you going to use it, will there be more basins or only the nest?
Once you have decided these you can feel free to design your nest, you don't need to make a plan but remember: to fail to prepare is to prepare to fail. Make sure the ants will actually live in it, they really will not will not 4 foot wide chambers or a swimming pool will they? So keep it realistic.
If you keep to around the width of a pencil for the tunnels and the chambers to range from around the size of a 50p coin and around three times that size then it should be successful. How large the whole thing becomes is completely your choice, just think about how large your colony is going to be, where you are going to keep it etc.
I have listed here everything you will need for building, and some places you can get them from, with approx prices:
Plaster - This is probably one of the easiest to get hold of. You can get it from most DIY stores such as Homebase or B&Q. Most types of plaster will work but try to avoid the under cost plaster as it will be very rough and will not be as effective to use. You will want to go for multi use or the finishing coat plaster (i have used the finishing coat). A bag of this will not be too expensive.
Glass - This is just a sheet of square glass to act as a lid/window. You can get this from loads of places. I personally buy them in the form of a large picture frame and then take the glass out, this will only cost a few pounds from a cheap shop like Wilkinson's. Take note of the exact one you brought in case you glass ever needs to be replaced.
Plasticine - You will need to create a mould for the plaster and Plasticine is usually the best thing to use for this. You can use other things if you really want like blue tack, play-doh or maybe some objects you could mould around. I found that the best offer is in Toys 'R' Us and is a 'Plasticine activity bucket' and is what i use. It costs around £5 which is not too steep as it can be re-used as many times as you want.
These are only the core things needed to create the nest, but also other things could be used for accessories, like as you can see from the picture further down in this page, i have a piece of hose pipe which i use to get water into the nest.
Once you have all your materials its time to make. I have made a step by step here with pictures to help. Hope it guides you well.
1) Firstly you will need to make your mould. Take your piece of glass and using Plasticine (or whatever you decided to use) start to build the negative of what you want you nest to be. NOTE - Before you start putting your Plasticine on it is a very good idea to coat the glass with cling film first or grease it in someway, i didn't one time and the glass cracked and snapped when i tried to get it out.
2) Now you should have something which looks like this:
Make sure there is a way out (like at the bottom of the above picture) if you plan to attach it to another basin.
It is now time to setup and get ready for the plaster. If you have a board big and strong enough I would do it all on that. Lay down some newspapers to protect the surface and also something water resistant like cling film or kitchen foil.
Now you need a 'wall' to keep all the plaster in once you pour it in. I decided to use some Lego blocks for mine but you can do lots of things, such as using some planks of woods or maybe putting it in a box etc. I would recommend leaving a small gap of around 5mm on each side so that the glass has a 'slot' and will not move once in use.
3) Now it is time for the plaster itself. Mix up what you think will be enough using the instructions on the packet. Then you just simply pour into your mould, some can be quite thick so make sure it gets everywhere. Get it up to the top of the wall if you can, but don't let it overflow!
4) You now need to smooth out the top so that it will have a flat bottom. Either use something like a trowel to smooth it yourself or, if you used a board to make it on, just shake the board gently and it should eventually all be level.
5) The plaster needs to set now, although it may only say it will take around 2 hours to dry on the packet, it is likely to take longer as it will be thicker than what it is supposed to be used for. I would leave it overnight if i were you to make sure it is really set before taking it apart.
6) Once set there is not much left to do. Start by taking the walls off around the nest and then flip it over. You should have something like this:
7) This part of the process is worth being very careful with. You must now take the glass off. Be very gentle when doing this so that you do not snap it, if you used cling film or have greased it, it should come off with ease. Once this is off it is the simple process of getting the Plasticine out. Use a knife or some other tool or just your hands if you like to get it out.
Give yourself a pat on the back, your now done. You may want to give it a clean, especially the glass, before putting anything in there. It's good to put some sand in places so that the ants are more likely to settle well. I hope you made it well and enjoyed it.
My thanks to Luke Goddard for this great step by step guide on how to make a Plaster Nest. | <urn:uuid:866440c0-bd33-47f0-8129-d07011b95ff5> | CC-MAIN-2017-43 | http://www.anthillwood.com/index.asp?pageid=380430 | s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820700.4/warc/CC-MAIN-20171017033641-20171017053641-00685.warc.gz | en | 0.971759 | 2,877 | 2.6875 | 3 |
Starting in spring next year, a crew of six will be sent on a 500 day simulated mission to Mars. In reality the crew will remain in a special isolation facility in Russia. To investigate the psychological and medical aspects of a long-duration mission, such as to Mars, ESA is looking for experiment proposals for research to be carried out during their stay.
During the simulated Mars mission, known as Mars500, the crew will be put through all kinds of scenarios as if they really were travelling to the Red Planet – including a launch, an outward journey of up to 250 days, arrival at Mars and, after an excursion to the surface, they will face the long journey home.
Locked in the facility in Moscow, the crew will have tasks similar to those they would have on a real space mission. They will have to cope with simulated emergencies; they may even have real emergencies or illnesses. Communication delays of as much as 20 minutes each way will not make life any easier.
Instead of having a spacecraft as their home, the crew will live in a series of metal tanks. Using narrow connecting passages, they can move between a medical area, a research area, a crew compartment and a kitchen – an area of only 200m2. There is even a special tank representing the Mars descent vehicle for simulation of a stay on the Martian surface.
ESA will participate in the study organised by the Russian Institute for Biomedical Problems (IBMP), and hopes to learn how to prepare for a real mission to Mars in the future. Following an Announcement of Opportunity, ESA is now looking for scientific experiments that can be integrated into the study.
In an interview ESA scientist Marc Heppener told us more about the Mars500 study.
Why is ESA participating in this study?
Our main interest is to look at the psychology of such a mission, knowing that you are enclosed for 500 days. As soon as there is a problem, the crew knows that they are on their own, and they have to solve it themselves. The only help available from the outside is through communications which may take up to 40 minutes.
At the start of their mission the crew will be supplied with all the food they will have to live off for the duration of the study. They have to keep track of their consumables amongst themselves. This limited food supply could lead to additional tensions amongst the crew.
We want to look at the psychological effects of the situation on your mental well-being, and on your capabilities of performing certain tasks, even tasks critical to the mission. In a real mission, for example, whether you are able to land a vehicle on the surface of Mars, and are you able to do the science once you are there? How will group relations evolve? What are the potential dangers could we encounter? What kind of countermeasures can we invent that can prevent this? For us we can also learn about what types of personality we should select for a real mission.
Almost as important; we are keen to learn more about the medical procedures. How do you define a good medical environment so that you can treat diseases? What are the medicines that you want to take with you on the journey? There will be one person amongst the crew with real medical training. But of course that person can also fall ill. So you have to have all kinds of back-up scenarios. To think all of that through is really difficult. We think doing a full simulation will teach us a lot.
And what is ESA's involvement in carrying out this study?
We are still negotiating our contract with IBMP. The basic agreement is that we are a full partner in the project, which is largely funded by Roscosmos with an important involvement of the Russian Academy of Sciences. ESA will be involved at all levels.
We will propose two volunteers out of the six people in the facility. We will also be involved in the full mission definition – all the steering boards, medical boards, the operations team who are from the outside communicating with the crew inside. That is also very important for us. We have experience in having astronauts flying on the International Space Station, but having astronauts travelling to Mars is a whole different ball game. And we will also be able to propose a full set of science proposals that we want to be executed.
So exactly what kind of experiments are you looking for?
We have a first draft list of the kind of science we are looking for. Such as crew composition, the influence of confinement on sleep, mood and mental health, and the effect of differences in personality, cultural background and motivation. But also on the medical side – physiological adaptation to an isolated environment, stress effects on health and well-being, changes in the immune system.
These are just a few examples of what we came up with as first ideas – but we are open to all good scientific proposals. Following a peer review we will make our selection of the best science. The Russians will also make their selection, and then a steering committee integrates all the science projects into one final project.
I should add that in parallel to this Announcement of Opportunity we also send one out for research on the Concordia Station. There we cooperate with the French and Italian owners of this Antarctic research station. Concordia has a similar objective to the Mars study, although it is a very different environment. We hope actually that a lot of scientists will propose things in parallel to both studies because that would be interesting to compare.
The concept sounds a bit like a reality TV show – is that a fair comparison?
Well, yes and no! Honestly, I believe it is fair to look at it that way - you could even push the comparison pretty far. Both look at interaction between people in all kinds of different situations. If you want there is even a prize at the end – not in the simulation – but if it is a real mission you will be the first person to walk on the surface of Mars, which is huge prize!
The comparison comes to a very sharp dead-end though - we will do a serious science experiment, and this is actually the only way we can prepare ourselves properly for a really long-duration spaceflight mission.
In the final set-up we will make sure that this is a good environment that it is safe and people are doing serious work also inside the facility. It is not entertainment – not at all. Having said that, part of a mission to Mars would also be the press interest it generates. We are still considering whether we should simulate that aspect.
What kind of people will you be looking for?
People who go through this selection will find they are looked at pretty much the same way an astronaut is selected. With our knowledge of astronaut selection and our involvement in the selection of subjects for bed rest studies - we have some basic knowledge about the type of people who would fit in this type of study. We will apply those criteria.
Of course we do want to have a reasonable reflection of a real crew – there should be people with medical qualifications, there should be some engineering qualifications, some science – it should really reflect that type of crew you would put on a real mission to Mars. We might be a little bit less strict about physical capabilities.
The volunteers will need to be away from work and family for an extended period of time. You might be away from home for one and a half years, maybe even longer for the full duration of the study itself, but also for training before and for tests after the study: we will follow those people after they have returned. It might be that effects are still visible after a year or longer and we will want to include that in our data.
Will they be paid for taking part?
Yes, there will be some compensation although it will not be a big salary. Legally there are some rules about the amount you have to pay volunteers each day. We are still discussing this with our Russian colleagues.
When will you start the process of finding volunteers?
In mid-June we will call for volunteers – probably through an announcement on the web. Our own pre-selection will then be followed with a selection by the integrated IBMP/ESA team. We believe we are going to have the selection concluded by November this year. | <urn:uuid:5166fad9-e786-4637-b299-d2a15c5aa85d> | CC-MAIN-2020-40 | http://www.esa.int/Science_Exploration/Human_and_Robotic_Exploration/Mars500/ESA_prepares_for_a_human_mission_to_Mars | s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400227524.63/warc/CC-MAIN-20200925150904-20200925180904-00755.warc.gz | en | 0.968923 | 1,676 | 3.15625 | 3 |
This CBC report is one of many dozens of articles in the world’s press highlighting one rather small but startling assertion in a recent OECD report on the effects of Covid-19 on education – that the ‘lost’ third of a year of schooling in many countries will lead to an overall lasting drop in GDP of 1.5% across the world. Though it contains many other fascinating and useful insights that are far more significant and helpful, the report itself does make this assertion quite early on and repeats it for good measure, so it is not surprising that journalists have jumped on it. It is important to observe, though, that the reasoning behind it is based on a model developed by Hanushek and Woessman over several years, and on an unpublished article by the authors that tries to explain variations in global productivity according to the amount and – far more importantly – the quality of education: the claim is that long-run productivity is a direct consequence of the cognitive skills (or knowledge capital) of a nation, which can be mapped directly to how well and how much the population is educated.
As an educator I find this model, at a glance, to be reassuring and confirmatory, because it suggests that we do actually have a positive effect on our students. However, there may be a few grounds on which it might be challenged (disclaimer: this is speculation).

The first and most obvious is that correlation does not equal causation. The fact that countries that invest in improving education consistently see matching productivity gains in years to come is interesting, but it raises the question of what led to that investment in the first place, and whether that might be the ultimate cause, not the education itself. A country that has invested in increasing the quality of education would, normally, be doing so as a result of values and circumstances that may lead to other consequences and/or be enabled by other things (such as rising prosperity, competition from elsewhere, a shift to more liberal values, and so on).

The second objection might be that, sure, increased quality of education does lead to greater productivity, but that it is not the educational process as such that is causing it. Perhaps, for instance, an increased focus on attainment raises aspirations.

A further objection might be that the definition of ‘quality’ does not capture what they think it measures. A brief skim of the model suggests that it makes extensive use of scores from the likes of TIMSS, PIRLS and PISA, standardized tests used to compare educational ‘effectiveness’ in different regions. These embody quite a lot of biases, are often manipulated at a governmental level, and, as I have mentioned once or twice before, are extremely dubious indicators of learning: in fact, even when they are not manipulated, they may indicate willingness to comply with the demands of the powerful more than learning (does that improve GDP? Probably).

Another objection might be that absence of time spent in school does not equate to absence of education.
Indeed, Hanushek and Woessman’s central thesis is that it is not the amount but the quality of schooling that matters, so it seems bizarre that they might fall back on quantifying learning by time spent in school. We know for sure that, though students may not have been conforming to curricula at the rate desired by schools and colleges, they have not stopped learning. In fact, in many ways and in many places, there are grounds to believe that there have been positive learning benefits: better family learning, more autonomy, more thoughtful pedagogies, more intentional learning community forming, and so on. Out of this may spring a renewed focus on how people learn and how best to support them, rather than maintaining a system that evolved in mediaeval times to support very different learning needs, and that is so solidly packed with counter technologies and so embedded in so many other systems that have nothing to do with learning that we have lost sight of the ones that actually matter. If education improves as a result, then (if it is true that better and more education improves the bottom line) we may even see gains in GDP. I expect that there are other reasons for doubt: I have only skimmed the surface of the possible concerns.
I may be wrong to be sceptical – in fairness, I have not read the many papers and books produced by Hanushek and Woessmann on the subject, I am not an economist, nor do I have sufficient expertise (or interest) to analyze the regression model that they use. Perhaps they have fully addressed such concerns in that unpublished paper and the simplistic cause-effect prediction distorts their claims. But, knowing a little about complex adaptive systems, my main objection is that this is an entirely new context to which models that have worked before may no longer apply and that, even if they do, there are countless other factors that will affect the outcome in both positive and negative ways, so this is not so much a prediction as an observation about one small part of a small part of a much bigger emergent change that is quite unpredictable. I am extremely cautious at the best of times whenever I see people attempting to find simple causal linear relationships of this nature, especially when they are so precisely quantified, especially when past indicators are applied to something wholly novel that we have never seen before with such widespread effects, especially given the complex relationships at every level, from individual to national. I’m glad they are telling the story – it is an interesting one that no doubt contains grains of important truths – but it is just an informative story, not predictive science. The OECD has a bit of a track record on this kind of misinterpretation, especially in education. This is the same organization that (laughably, if it weren’t so influential) claimed that educational technology in the classroom is bad for learning. There’s not a problem with the data collection or analysis, as such. The problem is with the predictions and recommendations drawn from it.
Beyond methodological worries, though, and even if their predictions about GDP are correct (I am pretty sure they are not – there are too many other factors at play, including huge ones like the destruction of the environment that makes the odd 1.5% seem like a drop in the bucket), it might be a good thing. It might be that we are moving – rather reluctantly – into a world in which GDP serves as an even less effective measure of success than it already is. There are already plentiful reasons to find it wanting, from its poor consideration of ecological consequences to its wilful blindness to (and causal effect upon) inequalities, to its simple inadequacy to capture the complexity and richness of human culture and wealth. I am a huge fan of the state of Bhutan’s rejection of the GDP, which it has replaced with the GNH happiness index. The GNH makes far more sense, and is what has led Bhutan to be one of the only countries in the world to be carbon positive, as well as being (arguably but provably) one of the happiest countries in the world. What would you rather have: money (at least for a few, probably not you), or happiness and a sustainable future? For Bhutan, education is not for economic prosperity: it is about improving happiness, which includes good governance, sustainability, and preservation of (but not ossification of) culture.
Many educators – and I am very definitely one of them – share Bhutan’s perspective on education. I think that my customer is not the student, or a government, or companies, but society as a whole, and that education makes (or should make) for happier, safer, more inventive, more tolerant, more stable, more adaptive societies, as well as many other good things. It supports dynamic meta-stability and thus the evolution of culture. It is very easy to lose sight of that goal when we have to account to companies, governments, other institutions, and to so many more deeply entangled sets of people with very different agendas and values, not to mention our inevitable focus on the hard methods and tools of whatever it is that we are teaching, as well as the norms and regulations of wherever we teach it. But we should not ever forget why we are here. It is to make the world a better place, not just for our students but for everyone. Why else would we bother?
Originally posted at: https://landing.athabascau.ca/bookmarks/view/6578662/skills-lost-due-to-covid-19-school-closures-will-hit-economic-output-for-generations-hmmm | <urn:uuid:6fabe0d4-fe82-4ed3-a380-b2f31b0b8b25> | CC-MAIN-2024-10 | https://jondron.ca/tag/oecd/ | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475238.84/warc/CC-MAIN-20240301093751-20240301123751-00130.warc.gz | en | 0.971282 | 1,767 | 2.640625 | 3 |
At some stage of your growing journey you will be keen to try your hand at propagation. There are lots of reasons – perhaps someone you know has a lovely plant growing and you can’t find it in a nursery. Or perhaps you might be the one to have that lovely plant and would like to strike some cuttings to give away to a friend who has always admired it. Maybe you’d like to get a number of plants growing to save you money from buying them all – or – maybe you’re even just up for a challenge and want to try something new!
There are a few basic elements of plant reproduction that we need to look at – so let’s talk about sex! Sexual reproduction of plants occurs when the female part of the flower is fertilised (through pollination) and the plant then sets seed. This seed contains the genetic material from both male and female flowers, so every seedling grown from these seeds will show characteristics of its lineage but will have individual variances. This occurs naturally, but can be exploited by human intervention and deliberate interbreeding. This is why we have such a wide range of variability within the same species of plant.
This makes saving seed from your healthiest, best tasting and best performing vegetables a great thing to do. Plants evolve to be specifically suited to conditions and climate, so you can produce your own hardy and reliable strains of your favourite plants this way. Most annual plants reproduce sexually – so most vegetables fall into this category.
The other method of plant propagation is ‘asexual’. This involves making an exact duplicate of the parent plant’s genetic material by grafting, layering, cuttings or root division. Many perennial herbs and shrubs are propagated by at least one of these methods.
For home gardeners, saving your own seed, taking cuttings or dividing clumps of plants by the roots are the most common ways to propagate. These methods don’t require specialised or expensive equipment and can be achieved with a little practice and some trial and error.
Some plants strike best from ‘softwood’ cuttings (the soft, young growth tips of plants); ‘hardwood’ cuttings (more mature ‘branch’ material that is generally not flexible) or ‘semi-hardwood’ cuttings (somewhere in between the two – generally more ‘branch like’ in appearance, but still flexible). Again a good gardening book will have advice on whether to use soft/hard/semi-hardwood cuttings – and it can vary depending on time of year.
With all propagation methods, there are some ‘golden rules’ that will help you achieve success. (Step 1 shown at right)
As the saying goes, timing is everything. Many plants can be successfully propagated IF you take cuttings at the right time of year. A good gardening book will recommend the season to achieve the best strike rate. The same applies to germinating seeds. By all means, experiment and push the boundaries, but working with nature’s seasons will always bring you better results. Some gardeners also follow moon planting guides – and they swear by the results.
Plants have their optimal temperatures for seed germination or striking roots from cuttings. You can control the environment with greenhouses or heated beds. (Some ingenious ideas are out there for simple DIY versions, if you are looking at small-scale home growing.) Ideal propagation temperatures quoted in books mean soil temperatures – which are usually lower than air temperatures. A good way to approach sowing seeds is to do a few at a time. By taking a ‘batch’ approach you are not gambling all your seeds at once if weather is changeable (as it often can be in early spring and autumn, the most common seasons for raising seedlings). (Step 3 shown at right)
Seed germination, and a successful cutting strike rate, rely on adequate moisture – not too little and not too much. Excess moisture (in the soil and in the air) is one of the major causes of fungal disease or ‘damping off’. Watering by a misting system (or even a hand trigger bottle) is a good way to provide moisture in small amounts, but at regular intervals. Ensure good airflow, and make sure the water you use is good quality – excessive salt or minerals in water can affect results.
To avoid the spread of disease, always propagate from healthy plant material. Use a good quality seed raising or propagation mix. These are especially formulated to have the right balance of water holding and air flow, and to allow roots to easily penetrate between light particles. Keep secateurs sharp and clean. Wipe blades down with tea tree oil between uses, or if going from plant to plant to take cuttings. Wash seedling trays and pots before use – scrub them in water to remove built up dirt, then wash thoroughly in a diluted bleach solution (1:10 bleach to water) and allow to dry. (Step 4 shown at right)
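The 1:10 bleach wash above is just a ratio calculation. As an illustrative sketch (assuming ‘1:10’ means one part bleach to ten parts water, i.e. eleven parts total; the helper function is made up for this example):

```python
# Hypothetical helper for the 1:10 bleach-to-water wash described above.
# Assumes "1:10" means 1 part bleach to 10 parts water (11 parts total).

def dilution_volumes(total_ml, bleach_parts=1, water_parts=10):
    """Return (bleach_ml, water_ml) needed to mix a given total volume."""
    parts = bleach_parts + water_parts
    bleach_ml = total_ml * bleach_parts / parts
    return bleach_ml, total_ml - bleach_ml

bleach_ml, water_ml = dilution_volumes(1100)
print(bleach_ml, water_ml)  # 100.0 1000.0
```

So a 1.1 litre batch of wash takes 100 ml of bleach topped up with a litre of water.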
Root-forming hormones.
These are often used to encourage cuttings to strike roots more quickly. While not essential, some plants will have a much higher strike rate with their use. Available as powder, liquid or gel. (Some growers prefer one or the other for different uses – but if you are just starting out; use whatever you can find. I don’t think it’s critical.) Many organic growers like to use honey, which has anti-bacterial qualities to help heal cuttings.
Basic steps for taking cuttings:
• Step 1. Take tip cuttings allowing 4 – 6 nodes (leaf growth points) on each one.
• Step 2. Remove lower leaves with fine-pointed secateurs or fingers – causing as little damage to the cutting as possible. Any wound can increase the chance of disease.
• Step 3. Trim back top leaf growth. Remove young leaves and cut older leaves back by roughly 50%. This reduces the amount of water lost through leaves by transpiration.
• Step 4. Dip bottom stem in rooting hormone.
• Step 5. Use a pencil to create a small hole in your pre-dampened propagation mix, about 3 cm deep – enough to cover the two lower nodes well, but you don’t want to go too deep.
• Step 6. Firm around cutting with soil, and water with a gentle rose spray. (Step 6 shown at right)
• Step 7. Leave in a warm spot with good light but not direct sunlight and check daily. Water as needed – but probably at least once per day. Use a mister spray or watering can with a gentle rose.
Providing cuttings remain healthy looking, you can assume roots will be developing. You can lift up the seedling tray after a couple of weeks and if you’re lucky, you may see fine roots protruding. Otherwise give a cutting a gentle tug. If you feel resistance, you know roots have begun to form. Gentle investigation of the root formation will indicate when the cuttings are ready for transplant. (Root growth approx 4 weeks shown at right)
Often, top growth will be starting to appear on the cutting.
These photos are showing taking softwood cuttings from a Kangaroo Apple (Solanum aviculare). (Cutting at approx 9 weeks shown below) | <urn:uuid:7ec68790-5b2b-42cf-8d86-c0a1337b0e8a> | CC-MAIN-2019-35 | http://www.greenlifesoil.com/sustainable-gardening-tips/propagation-tips-tricks-grow-your-own-plants-for-free | s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313996.39/warc/CC-MAIN-20190818185421-20190818211421-00468.warc.gz | en | 0.941872 | 1,547 | 2.828125 | 3 |
Located among longleaf pine and hardwood trees, low ridges, and broad floodplains, Tuskegee, Alabama, is a small town that’s been a big part of American history. Despite a modest population of less than 10,000 people, Tuskegee has been able to boast many notable residents who have made names for themselves in everything from sports to the arts. Among them have been the Tuskegee Airmen, the first African American Air Force unit, which served during World War II, and Rosa Parks, the icon of the civil rights movement, who sparked the Montgomery bus boycott in 1955.
The Tuskegee syphilis experiment, conducted from 1932 to 1972, examined the natural progression of untreated syphilis in poor, rural black men — without their informed consent.
Tuskegee, though, is also remembered for one of the worst chapters in the history of medical research. Forty years ago, in 1972, newspapers revealed the story of a syphilis study that was callous in its deception of research participants, and damaging, even today, in the distrust it sowed among black Americans. The study had started another 40 years prior, in 1932, when the United States Public Health Service (USPHS) needed to rescue a financially troubled syphilis intervention in Macon County, Alabama. The intervention was first established in partnership with a Chicago-based philanthropic organization, but its future was uncertain when the organization’s funds dried up during the Great Depression.
Syphilis, the sexually transmitted disease caused by the bacterium Treponema pallidum, was the subject of conflicting scientific hypotheses at the time, including the hypothesis that the disease behaved differently in blacks and whites. Interested in testing those hypotheses and faced with disappearing funds for treatment, the USPHS turned its project into a study of untreated syphilis. Also influencing the decision was the fact that the USPHS was discouraged by the low cure rate of the treatments at the time, mercury and bismuth. But by the mid-1940s, penicillin was in use as a proven treatment for syphilis. In spite of that medical advance, the USPHS withheld treatment from a total of 399 infected patients by the time the study ended in 1972.
What made the study especially insidious was its benign beginning, evolving from a partnership with an organization that promoted the health and welfare of black Americans. It thus benefited from the cooperation of black leaders at the Tuskegee Institute, as well as local church leaders, even as it turned into a deceptive study that gave syphilis victims the false impression that they were receiving treatment for their illness. When recruiters for the study announced that locals could receive free medical care, the response was overwhelming. Once recruited, those who had syphilis were told they were being treated for “bad blood,” a colloquial term for a variety of ailments. In truth, they received no proper treatment for their illness, so that investigators could study its progression.
In its beginning stages, syphilis causes sores and rashes around the mouth and pubic region, but left untreated, it can cause more serious complications and even death. Some of the participants in the Tuskegee syphilis study went blind. Others went insane. Estimates of the number who died from untreated syphilis range from 28 to 100.
The person who blew the whistle on the study was Peter Buxtun, a venereal disease investigator employed by the U.S. Public Health Service. Reflecting on the Tuskegee study years later, he said, “I didn’t want to believe it. This was the Public Health Service. We didn’t do things like that.” A front-page headline on the July 26, 1972, issue of the New York Times announced, “Syphilis Victims in U.S. Study Went Untreated for 40 Years.” On November 16, 1972 — 40 years ago tomorrow — a memorandum from Assistant Secretary of Health Merlin DuVal ordered the termination of the Tuskegee study. (Prior to serving as assistant secretary of health, DuVal was also the founding dean, in 1964, of the University of Arizona College of Medicine.)
In addition to terminating the study, the federal government began paying reparations to the victims and their families. In a presidential apology 25 years later, Bill Clinton admitted that “[t]he United States government did something that was wrong — deeply, profoundly, morally wrong.” In spite of those acts, the legacy of the Tuskegee study persists. Although distrust of the medical establishment existed among blacks long before the study, a common view among researchers and health care providers is that revelations of the study’s unethical practices cemented that distrust. Suspicions about childhood vaccines and AIDS research and treatment, among other resistance to medical care and research, have been attributed to the study’s legacy.
Exactly how much distrust the Tuskegee study has generated has been a topic of debate. Uncertainty lurks in sorting out the study itself from what historian Susan Reverby has called “the myriad other experiences of Black America with health care,” including experiences “lacking labels or formal recognition [that] become part of the reference to ‘Tuskegee.’” Reverby cites “day-to-day encounters by Black Americans in the arena of health care [that] reopen old wounds,” rendering the name Tuskegee a sort of shorthand for various causes of suspicion and unease.
Although ethical guidelines now prevent the kind of research that took place in Tuskegee, problems of unequal treatment linger in the medical establishment. Numerous studies have revealed that black, Latina/Latino, and other minority patients commonly receive a lower quality of service from health care providers compared to their white counterparts. Some of the disparities in the services they receive can be explained by differences in insurance coverage and financial resources, but disparities still exist even after accounting for those factors. Examples of lower quality of service include less aggressive treatment for heart conditions and a lower likelihood of receiving kidney transplants and the drug therapy commonly called the “AIDS cocktail.”
Among other possible causes, time constraints while interacting with patients can drive unequal service, leaving health care providers to fall back on stereotypes about a patient’s interest in self-care and willingness to follow a doctor’s recommendations. The resort to stereotypes can be made worse if the patient, lacking trust in the health care provider, withholds information during a check-up or other visit.
This entanglement with so many other issues and experiences, however, can ensure that the lessons from Tuskegee remain relevant as we confront ongoing problems of inequality in health care. Tuskegee can be the moment we remember to consider how far we’ve come and how far we still have to go. In the same way that the Stonewall riots have started conversations about homophobia and the 1973 Wounded Knee incident has started conversations about the grievances of Native Americans, Tuskegee can help us continue the conversation about health disparities.
The good news is that there are solutions. We can address disparities by ensuring that more minorities enter the health professions and more health professionals practice culturally competent health care. And the simple act of helping people become aware of their unconscious racism in a nonthreatening way can prompt them to act on their egalitarian ideals, regulating their interactions with people from other racial and ethnic groups. | <urn:uuid:bb44c6a9-21db-416b-a0e7-f9155aa2551c> | CC-MAIN-2017-13 | http://blog.advocatesaz.org/2012/11/15/i-didnt-want-to-believe-it-lessons-from-tuskegee-40-years-later/ | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189525.23/warc/CC-MAIN-20170322212949-00261-ip-10-233-31-227.ec2.internal.warc.gz | en | 0.96248 | 1,530 | 3.453125 | 3 |
The usage of ultrasound technology to break down fat cells beneath the skin is known as ultrasonic or ultrasound cavitation. It is a non-surgical treatment for cellulite and targeted fat. Ultrasonic vibrations are used to exert pressure on fat cells during this technique. The pressure is strong enough to cause the fat cells to dissolve into water. The body can then eliminate it as waste via your urine.
The destroyed fat cells are expelled as waste from the body via the liver. This procedure is used in conjunction with other weight-reduction treatments to aid in the removal of extra fat. It is favored over invasive procedures for removing body fat. It is vital to realize that if you eat a high-calorie diet, your fat may return.
How Does It Operate?
Ultrasonic cavitation uses radio frequencies and low-frequency ultrasonic waves to shape the body. These waves cause bubbles to form around the fat deposits beneath the skin. The bubbles then burst, breaking up the fat deposits and allowing them to drain into the interstitial and lymphatic systems. The fat deposits are broken down into glycerol and free fatty acids. Glycerol is then recycled by the body, while free fatty acids are transported to the liver and excreted as waste.
How long does a treatment of ultrasonic cavitation last?
Because the technique is tailored to each individual’s needs, it may take longer for some than others. However, depending on the treatment, it is typically completed in one to three sessions, with two weeks between each session. Each session lasts around 45 to 75 minutes. The effects of ultrasonic cavitation can be seen in 6 to 12 weeks.
Which regions of the body are ideal for ultrasonic cavitation?
Ultrasonic cavitation is more effective in areas with localized fat. The stomach, flanks, thighs, hips, and upper arms are examples of such places. This technique cannot be performed on bodily regions such as the head, neck, or other bone areas.
Best frequency for ultrasonic cavitation:
The ideal frequency for a radiofrequency cavitation machine ranges between 20 kHz and 30 kHz. Be wary of knock-off ultrasonic devices with frequencies of 40 to 60 kHz, since they are too weak to penetrate the fat or generate enough disruption in the cell to cause cavitation. For safety reasons, many domestic models operate at these frequencies but yield poor or no results.
A widespread misconception about ultrasound is that the greater the frequency (40, 50, or 60 kHz), the heavier the signal. Nothing could be farther from the truth; in actuality, these vibrations are far weaker.
Here’s an easy example: if you clap your hands 25 times in a minute, you can clap slowly, solidly, and loudly. If you were told to clap your hands 40, 50, or 60 times in one minute, you would have to clap extremely quickly, which would force you to clap more lightly. A slower, heavier, and far more penetrating frequency of 25 kHz is excellent for cavitation.
Most everyone wants to live a healthy and happy life. However, many take no action in doing so. Living a healthy lifestyle isn’t just about staying fit and exercising here and there. It involves much more than that, including cardio, strength training, and a healthy intake of fruits and vegetables.
To maintain a healthy cardiovascular lifestyle, you will want to exercise 3-5 days a week for approximately 40-60 minutes. Cardio can include walking, jogging, running, bicycling, or stair climbing. For most women, spending time at their local women’s gym taking part in Zumba classes and exercise workout routines is their preferred means of fitting cardio into their daily regimen. Others opt to walk or run in place while watching their favorite TV show. The key is making it enjoyable so you don’t dread a workout.
If trying to shed those extra pounds, it’s important to recognize that you must burn more calories than you consume in any given day. Therefore, you want to keep track of both your calorie intake and the calories burned during workouts. Keep in mind that it takes a deficit of roughly 3,500 calories to burn one pound of fat.
In order to build and/or tone muscles, strength and weight training is needed. Starting small with 5 lb weights, doing 10-12 reps and 2-3 sets in any movement will help you flaunt your muscles during the warmer months. Some studies have indicated that for each pound gained in muscle, you will burn approximately 35-50 calories a day. That equates to 175-250 calories burned in a day’s time for an additional 5 pounds of muscle.
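As an illustrative sketch, the arithmetic above can be written in a few lines of Python. It simply encodes the article’s rules of thumb (a roughly 3,500-calorie deficit per pound of fat, and 35-50 calories per day per pound of added muscle); the function names and the 500-calorie example deficit are made up for illustration, not advice.

```python
CALORIES_PER_POUND_OF_FAT = 3500  # the article's rule of thumb

def days_to_lose_one_pound(daily_deficit_kcal):
    """Days of a steady daily calorie deficit needed to lose one pound of fat."""
    return CALORIES_PER_POUND_OF_FAT / daily_deficit_kcal

def extra_daily_burn_from_muscle(pounds_of_muscle, kcal_per_pound_per_day=35):
    """Extra calories burned per day from added muscle (35-50 kcal per pound)."""
    return pounds_of_muscle * kcal_per_pound_per_day

print(days_to_lose_one_pound(500))          # 7.0 -> about a week at a 500 kcal/day deficit
print(extra_daily_burn_from_muscle(5))      # 175 (low estimate)
print(extra_daily_burn_from_muscle(5, 50))  # 250 (high estimate)
```

These are estimates only; individual metabolism varies widely.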
Starting healthy eating habits doesn’t have to be scary or overwhelming. The key to a healthy diet is to eat well-rounded, nutritional foods incorporating several groups without eliminating any of them. Living a healthy lifestyle means adopting the idea that it’s not just a diet, but a life change. Being a life change, don’t eliminate your favorite treats, as you won’t stay motivated. It’s alright to include a piece of cake or ice cream in your diet in moderation.
You will also want to eat fruits, vegetables, grains, and legumes that are high in complex carbohydrates, fiber, vitamins, and minerals. Other food groups can be filled by dairy products, lean meat and poultry, and fish. Low-fat options in these categories will also improve your diet.
The most important piece to living a healthy lifestyle is to first establish attainable goals while keeping informed on the most up-to-date fitness and diet information.
Lorna Horne is a freelance writer and blogger who provides health advice articles to one of the best Chicago gyms for women.
By Lon Maxwell, Reference Department
Good morning class, and welcome to Superhero 101. With the massive surge of movies, books, and television about and starring spandex-clad gladiators from the last century, we cannot help but look back to the origins of the archetypal superhero. Most modern comic book enthusiasts think of comics and their associated heroes as falling into the Golden, Silver, Bronze, and Modern ages, with the superhero archetype we all think of (e.g. Superman, Wonder Woman, Captain America) beginning in the Golden age. While I agree with the ages and their application in the history of comics, I believe the heroes go back so much farther. I would go even as far as to say that our older heroes are still as popular now as they were in their nascent era. So let us begin learning how the heroes of humanity’s past are the heroes of today’s children.
Okay, that’s what it would say at the top of the syllabus if there was a university crazy enough to give me carte blanche to design a course of my choosing. I’m not sure what department would end up with a course like that; history, literature, and anthropology all have good claims on the subject matter. (I’d probably choose anthropology.) I started to think about this back in 2005, when another set of books came out claiming to be the next Harry Potter. It was something to do with Mount Olympus in New York and some unfortunately named kid. Percy Jackson brought Greek mythology back to the American consciousness with a vengeance. I remarked to a coworker in the children’s department that it was like someone had mixed D’Aulaires’ mythology with comic books, and then I realized there was nothing to mix, that the original sequential pictures were drawn on the side of black-figure pottery. The more I thought about it, the further back I could push that genesis moment in drawn superheroes, back past Homer, beyond Gilgamesh, back to the paintings in Chauvet and Lascaux and the Löwenmensch. Those giant figures on cave walls and anthropomorphized animals showed a belief in a being better than an average human, a super man.
The real origin we can trace the ideas back to is the stories that have come down to us along with artistic renderings. Gilgamesh is probably the earliest recorded super hero. He was stronger and braver and more cunning than an average person. This was because he was two-thirds god (yeah, I can’t make the math on that work either), but he wasn’t a god himself. Even the Old Testament refers to a race of giants like Goliath, who were the children of fallen angels and human women, but they were not very heroic. Yet still that was the de facto origin story for most of the Stone Age and Classical Age heroes, some combination of divine ancestors mixed with human to make for an invulnerable hero (Achilles), a super strong one (Heracles), or some mix of characteristics (Theseus, Perseus, etc.). There are even examples of plain guys with nothing but their physical prowess and sharp wits, like Batman – oh, sorry, I mean Odysseus. The superhero of today would fit fine in ancient Greece and Rome if he just swapped his tights and alien parents for a toga or chiton and a more deified lineage.
The medieval world and its dominating monotheistic religions brought an end to all this human/deity philandering. Heroes now were men and women who were blessed by God like Robin Hood, Pwyll of Dyfed, and King Arthur or sorcerers of sketchy origin like Merlin. Real life heroes began to be magnified to supernatural proportions. Joan of Arc, El Cid, Roland, Boadicea, and Charlemagne all have fantastic elements woven into their stories. Off in the cold north of Europe the Vikings still had the demigod heroes of the early sagas, but even these saw a Christianization as people adopted the religion but didn’t want to give up their old fireside stories. Hero tales are not the sole property of the west in the middle ages. Sinbad the mariner was sailing the Arabian Sea while the brothers of the peach orchard, Guan Yu, Liu Pei, and Zhang Fei, were fighting to unite China.
Since the medieval era, we have been going through our past for inspiration. There have been resurgences of interest, over and over, in the classical mythos as well as the Arthurian legends. Scholars debate the historicity of Troy and Camelot. Writers like Tennyson and Keats borrowed the themes for new works. It wasn’t until early last century that we began something new. Superman, Captain Marvel, Captain America and Wonder Woman each debuted and added new heroes to our mythology. This coincided with a rise in science fiction stories in the popular publishing world. Now we have science fiction retellings of the Odyssey, movies of Sinbad, video games of the Romance of the Three Kingdoms, and graphic novels that tell the 4,000-year-old stories of Gilgamesh and Troy. Children today are learning the same lessons as the kids of millennia past from the same characters. We have made our own heroes, but we have built them on a timeless framework that goes back to the beginning of humanity, and we have brought along a best-of collection of the heroes of the past.
Sources and Suggested Reading:
- The History of Art by H. W. Janson (709 JAN)
- Boys of Steel: The Creators of Superman by Marc Tyler Nobleman (J 741.5 NOB)
- D’Aulaires’ Book of Greek Myths by Ingri D’Aulaire (J 292 DAU)
- The Epic of Gilgamesh by Kent H. Dixon (892.1 DIX)
- The Hero With a Thousand Faces by Joseph Campbell (201.3 CAM)
- Romance of the Three Kingdoms by Luo Guanzhong (895.13 LUO)
- The Song of Roland by Anonymous (YA 841.1 CHA)
By Lon Maxwell, Reference Department
Fairy tales come from many places: mythology, folk legends and even news headlines of the day. Hansel and Gretel may have hearkened back to the great famine of the fourteenth century, when parents abandoned children and cannibalistic old ladies were not unheard of. The Pied Piper of Hamelin refers back to the children’s crusade, when thousands of children left for the holy land to convert the Muslims. Cinderella has elements from the original King Leir folk tale (evil sisters who steal a throne) mixed with the myth of Rhodopis, a high-class escort whose sandal is stolen by an eagle and dropped on a prince’s head, causing him to search the kingdom for the owner of the mysterious footwear. Snow White and Rose Red hearkens back to the mythology of animals turning into gods. Snow White and her sister help a bear and an ungrateful dwarf. The dwarf tries to get the bear to eat the girls but is himself killed. The bear turns into a prince and the girls marry him and his brother respectively. Interestingly, in the original German folktales there are two different Snow Whites: Schneeweißchen has the sister, while Schneewittchen has the dwarves.
Time has removed the darker parts of many fairy tales. Many of the events of early versions of the fairy tales we know and love would be unfit for children (and some adults).
- In a very early version of Sleeping Beauty by Giambattista Basile, the princess is raped by a king and then gives birth to twins that revive her. She tracks down the twins’ father, who is already married, only to have his queen try to eat her babies. It’s all happily ever after, though: the king has the queen burned alive for her attempted infanticide so he can marry Sleeping Beauty.
- The Grimm version of Snow White is truly grim. The queen isn’t her stepmother; it’s her mother. The prince finds her dead, and she is woken when he is carting off her body and the poisoned apple falls from her mouth. I don’t care to speculate as to why he is carting off a beautiful dead girl. Finally, as punishment for what she’s done, the queen, who in this version asks the huntsman to bring her Snow’s liver and lungs to eat, is made to wear iron shoes that have been kept in a fire all day and dance until she dies.
- Wilhelm and Jacob Grimm don’t bother to take out the gory details of Cinderella. Their version has the step-sisters cutting off toes to fit their feet into the slipper, and when Ella is finally proven to be the prince’s one true love, they get their eyes pecked out by doves.
- In early versions of Little Red Riding Hood, prior to the polish and lightening of the Brothers Grimm, Red is fed bits of her grandmother before being eaten by the wolf. Oh, and there’s no passing woodsman to rescue her so she just gets eaten. She doesn’t learn her lesson, only the reader does.
- Ariel, not her name in the original Little Mermaid, didn’t always end up with Eric (not his name either). In the original Hans Christian Andersen version, the mermaid was given legs but every movement felt as if swords were impaling her extremities. As she truly loves the prince, she dances, despite the pain, to win his affection, but he marries a princess from the neighboring kingdom. In one last grasp at gore, the little mermaid’s sisters bring her a knife and tell her to kill the prince and let his blood drip on her feet so that she becomes a mermaid again. She declines and, brokenhearted, dissolves into sea foam. So much for the Disney ending.
According to a study by Durham University anthropologist Dr. Jamie Tehrani, many of these tales are thousands of years old, going back to before the Indo-European language family began to split. Tehrani believes that this is why so many of these tales are found in multiple cultures. But fairy tales are finding themselves pushed to the foreground once again. Television, film, books, and comics have all revived classic tales with new twists. Disney’s revived princesses are seeing a further recreation into live-action movies, and their show, Once Upon a Time, has brought these characters into the real-ish world of primetime soap operas. Bill Willingham’s Fables series has done something similar, with the characters of our children’s stories living in modern Manhattan and on a farm upstate for those less human and more anthropomorphic. New books are written retelling old tales all the time. Anne Rice, writing as A. N. Roquelaure, wrote a series of erotically charged Sleeping Beauty tales in the mid-1980s with a follow-up that came out in 2015. Jasper Fforde turned nursery rhymes into nursery crimes with his books The Big Over Easy and The Fourth Bear. Neil Gaiman has taken elements of fairy tales and made them even darker. Gregory Maguire’s Wicked went from best-selling novel to Broadway, where it joined Into the Woods in modern retelling musical history. All of this shows the endurance these tales have and the likelihood of their continued popularity.
Perhaps the most fascinating question in all this is where our great x8 grandchildren will find their fairy tales. Will their parents lull them to sleep with the tales of diminutive people trying to destroy a magic ring? Will their grandparents recall nights listening to the story of the beautiful girl who fell in love with the handsome vampire? Will their dreams be peppered with stories of magical children in a sorcerous school making the world safe for everyone? History suggests that they will; that the tales of our modern pop culture will traverse the ages, slightly bent, occasionally warped, and find themselves sitting on the nightstands of children for generations to come, probably with some of the darkest parts edited out, right next to the copies of Jack the Giant Slayer and Cinderella.
a pentose sugar, a phosphate group, a nitrogen-containing base
Nucleotides are joined by phosphodiester bonds: the phosphate group connects the 3' C of one sugar to the 5' C of the next sugar
Backbone made up of alternating pentoses and phosphates
Nucleic acid strand has a polarity: a 5' end and a 3' end
amount of A = amount of T, G = C
key features of DNA
right-handed double-stranded helix, uniform diameter
antiparallel: two strands run in opposite directions
5' phosphate group and 3' hydroxyl group
complementary base pairing (A-T;C-G)
B-DNA consists of a double helix with approximately 10 base pairs per turn.
Found in nucleus (and mitochondria)
Coded instructions for making proteins
Weak bonds between bases (hydrogen bonds)
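The complementary, antiparallel pairing above is easy to sketch as a reverse-complement function (an illustrative Python example, not part of the original cheat sheet):

```python
# Pair A with T and G with C, then reverse the result because the two
# strands run antiparallel (the new strand is read 5' -> 3').
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand):
    return "".join(PAIR[base] for base in reversed(strand))

print(reverse_complement("ATGC"))  # GCAT
```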
Fatty acids → fats, lipids, membranes
Lipids are a broad group of compounds from biological origin that can dissolve in nonpolar solvents, such as chloroform and diethyl ether
Oily, greasy, waxy substances. Not water soluble. Extracted from organisms by organic solvents.
Even though they differ in structure (different types: fats, waxes, sterols, fat-soluble vitamins A, D, E, K), they share some of the same properties
Lipids in our body
Free fatty acid
Glycogen in animals
Glycogen is used for short-term energy storage; it is converted to glucose when energy is required.
Starch in plants
Lipids have more energy content per unit mass than carbohydrates
Lipids used for long-term energy storage
Triglycerides are converted to fatty acids and glycerol → energy
Triglycerides are broken down to yield acetyl CoA. | <urn:uuid:bca73a54-8e17-4ff5-b884-c4b70358d366> | CC-MAIN-2022-21 | https://cheatography.com/rhettbro/cheat-sheets/dna-lipid/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663011588.83/warc/CC-MAIN-20220528000300-20220528030300-00574.warc.gz | en | 0.820844 | 533 | 3.78125 | 4 |
Using the empirical research article that your instructor approved in the Week 5 assignment, ask yourself: “Is this a quantitative research article or a qualitative research article?” Remember, in quantitative research, the emphasis is on measuring social phenomena because it is assumed that everything can be observed, measured, and quantified. On the other hand, in qualitative research, it is assumed that social phenomena cannot be easily reduced and broken down into concepts that can be measured and quantified. Instead, there may be different meanings to phenomena and experiences. Often in qualitative research, researchers use interviews, focus groups, and observations to gather data and then report their findings using words and quotations.
Consider how these different methods affect the sampling design and recruitment strategy, and ask yourself how the recruitment of research participants will affect the findings.
For this Assignment, submit a 3-4 page paper. Complete the following:
- Read your selected empirical research article, and identify whether the study is a quantitative or qualitative study. Justify the reasons why you believe it is a quantitative or qualitative study. (Your instructor will indicate to you if you are correct in identifying the research design. This will point you to whether you will use the “Quantitative Article and Review Critique” or the “Qualitative Article and Review Critique” guidelines for the final assignment in week 10.)
- Using the empirical research article, focus on the sampling method in the study and begin to evaluate the sampling method by answering the following:
- Describe the sampling methods in your own words (paraphrase, do not quote from the article).
- Describe the generalizability or the transferability of the research finding based on the sampling method.
- Discuss the limitations the article identified with the sample and how those limitations affect the reliability or credibility.
- Explain one recommendation you would make to improve the sampling plan of the study that would address these limitations in future research.
Below is the article that was chosen: | <urn:uuid:1753ab5d-aa23-4054-9148-f86b5c24563c> | CC-MAIN-2021-17 | https://writerbay.net/urgent-homework-help-27362/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039596883.98/warc/CC-MAIN-20210423161713-20210423191713-00631.warc.gz | en | 0.907241 | 402 | 2.953125 | 3 |
Today, we’re going to make the world less comfortable, in two easy steps that each of you can do at home. Step 1 shows how easy it is to account for the carbon dioxide excess in the atmosphere based on our cumulative use of fossil fuels. Step 2 bypasses intricacies of thermal radiation to put an approximate scale on the amount of heating we would expect the excess CO2 to produce. Serves 7 billion.
Climate Change in Context
I view climate change as a genuine challenge to the stability of our coexistence with the planet. But it is not my primary concern. A far more dangerous threat to the human endeavor is, in my mind, our reliance on finite resources and the difficulty our economic systems will have coping with a decline in the availability of cheap energy. That said, the issues are closely linked—through fossil fuels—and both benefit from a drive toward renewable resources.
Climate change is also important to the energy scene in that it provides a cautionary tale. Despite being a straightforward phenomenon (as this post will show), it has been incredibly easy for various interests to stir mud into the water and create substantial—and sometimes growing—doubt about the validity of climate change induced by human activity. Generally speaking, complex issues have enough loose threads that claws can easily grab hold and make a mess of things, distracting the public for decades about whether there is indeed a ball of yarn somewhere in the mess.
If we take decades to acknowledge en-masse that we have passed peak oil—whenever that happens (or happened)—and maybe really should make a long-term plan to mitigate the effects, we’re in trouble. Markets will exert powerful influences in the right direction, but perhaps not far enough in advance of a real problem to be effective, and the market “solutions” are almost certain to be expensive by today’s standards. In any case, the swirling controversy around climate change suggests that even if markets go into overdrive to address post-peak realities, there may not be much consensus as to why things are transpiring as they are. We are fond of narratives, and there will be an ample selection of compelling explanations from which to choose.
Why do we loves our fossil fuels? It’s not for their visual aesthetics, their pleasant smell, or their enduring companionship. We only like them for their convenient, cheap energy. The energy is unleashed via combustion with oxygen. The chemical reactions for complete combustion of coal (represented here as straight-up carbon), natural gas (predominately methane), and gasoline (typified by octane) are:
C + O2 → CO2 + 7.9 kcal
(per 1 g of C: 2.7 g of O2 in, 3.7 g of CO2 out)

CH4 + 2O2 → 2H2O + CO2 + 13 kcal
(per 1 g of CH4: 4 g of O2 in, 2.25 g of H2O and 2.75 g of CO2 out)

C8H18 + 12.5O2 → 9H2O + 8CO2 + 11.5 kcal
(per 1 g of C8H18: 3.51 g of O2 in, 1.42 g of H2O and 3.09 g of CO2 out)
Each reaction has been scaled for one gram of input fuel. Notice that all three produce about 3 g of CO2 for every gram of input fuel. This 3:1 ratio applies to any mass/weight measure you care to use, and is easy to remember. Example: fill a car’s gas tank with 30 kg (65 lb; 40 ℓ; 10 gal) of fuel, and out will pop about 90 kg of CO2. Could you lift that much?
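The 3:1 rule of thumb falls straight out of molar masses. A quick illustrative sketch (my Python, not part of the original post):

```python
# CO2 produced per gram of fuel, from molar masses alone.
C, H, O = 12.011, 1.008, 15.999  # g/mol
m_co2 = C + 2 * O                # ~44 g/mol

fuels = {
    # fuel: (molar mass of one fuel molecule, mol CO2 per mol fuel)
    "coal (C)":       (C, 1),
    "methane (CH4)":  (C + 4 * H, 1),
    "octane (C8H18)": (8 * C + 18 * H, 8),
}

for name, (m_fuel, n_co2) in fuels.items():
    ratio = n_co2 * m_co2 / m_fuel
    print(f"{name}: {ratio:.2f} g CO2 per g fuel")
# coal ~3.66, methane ~2.74, octane ~3.08 -- all roughly 3:1
```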
Note that the 7.9 kcal derived from 1 g of carbon is a bit higher than for actual coal, which contains some volatiles and non-combustible junk. The real range is 3.6 kcal/g for lignite to 6.5 kcal/g for the ever-rarer anthracite. But for climate change, we’re only concerned with the amount of CO2 released per unit of energy derived. The mass of baggage in coal is irrelevant for our purposes (other than to bring the amount of CO2 released per gram of actual coal closer to the magic 3:1 ratio).
The CO2 intensity of each of these sources can be approximated by the amount of CO2 produced per kilocalorie of energy delivered. These work out to 0.47, 0.21, and 0.27 g/kcal for coal, gas, and oil, respectively, using the numbers above. I have seen this measure reported elsewhere as 0.39, 0.22, and 0.29 g/kcal—presumably accounting for the full suite of reactions in the real substance, rather than the principal ones considered here. Still, not too bad for a simple treatment (which is thematic of Do the Math posts).
Fossil fuel energy accounts for roughly 85% of our 12 TW power production today, so let’s call it 10 TW from fossil fuels. This breaks up into approximately 3, 3, and 4 TW for coal, gas, and petroleum, respectively. This mixture results in a weighted carbon intensity of 0.31 g/kcal using our simple numbers, and 0.30 g/kcal using the alternate numbers. See—you worry too much: our simple numbers do just fine.
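The intensity arithmetic can be sketched as follows, using the per-gram figures from the combustion reactions and the approximate 3/3/4 TW coal/gas/oil split given above (illustrative round numbers, as in the text):

```python
# g CO2 per kcal for each fuel, then the power-weighted average.
sources = {
    # fuel: (g CO2 per g fuel, kcal per g fuel)
    "coal": (3.7, 7.9),
    "gas":  (2.75, 13.0),
    "oil":  (3.09, 11.5),
}
intensity = {k: co2 / kcal for k, (co2, kcal) in sources.items()}
# -> coal ~0.47, gas ~0.21, oil ~0.27 g/kcal

power_tw = {"coal": 3.0, "gas": 3.0, "oil": 4.0}  # ~10 TW total
weighted = (sum(intensity[k] * power_tw[k] for k in power_tw)
            / sum(power_tw.values()))
print(f"weighted intensity: {weighted:.2f} g CO2/kcal")  # ~0.31
```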
10 TW of fossil fuel power means 10¹³ J/s, or about 3×10²⁰ Joules in a year (handy trick: one year is close to π×10⁷ seconds), turning into 7.5×10¹⁶ kcal of fossil fuel energy used each year. Given our intensity, this results in the emission of 2.3×10¹³ kg of CO2 every year into our atmosphere. That’s 23 gigatons per year.
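The same annual-emissions estimate in a few lines (same round numbers as the text):

```python
# 10 TW of fossil-fuel power for one year -> kg of CO2 emitted.
power_w = 10e12            # 10 TW
seconds_per_year = 3.15e7  # ~ pi x 10^7 s
joules = power_w * seconds_per_year   # ~3e20 J
kcal = joules / 4184                  # ~7.5e16 kcal
co2_kg = kcal * 0.31 / 1000           # 0.31 g/kcal, grams -> kg
print(f"{co2_kg:.1e} kg of CO2 per year")  # ~2.3e13 kg = 23 Gt
```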
Big numbers can be hard to put in context. We have to compare to other relevant big numbers to make any sense of them. The atmosphere is vast and relevant, so we should compare to the total mass of the atmosphere.
How massive is our atmosphere? The basic technique is simple: take the atmospheric surface pressure (measured in pounds per square inch, Pascals, whatever), and multiply by the surface area of the Earth to get the weight of the atmosphere. Atmospheric pressure lands near the convenient number of 100,000 Pascals (Newtons per square meter). Multiply by the surface area of Earth (4πR², where R = 6378 km, yielding 5×10¹⁴ m²) to get 5×10¹⁹ Newtons of weight. Weight is just mass times acceleration due to gravity, or 9.8 m/s², which for our purposes is as close to 10 m/s² as we might hope to get! So we end up with 5×10¹⁸ kg of atmosphere.
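The same estimate as code:

```python
import math

# Mass of the atmosphere: surface pressure x Earth's area / g.
p_surface = 1.0e5   # Pa (N/m^2)
r_earth = 6.378e6   # m
g = 9.8             # m/s^2

area = 4 * math.pi * r_earth**2   # ~5.1e14 m^2
weight_n = p_surface * area       # ~5.1e19 N
mass_kg = weight_n / g            # ~5.2e18 kg
print(f"atmosphere: {mass_kg:.1e} kg")
```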
Adding an annual deposit of 23×10¹² kg of CO2 to something that’s 5×10¹⁸ kg means we’ll change the composition by 23/5 parts per million (ppm), or 4.6 ppm each year.
We have one more wrinkle to iron out before we compare our result to measurements of CO2, because the measurements are usually expressed as parts per million by volume and not by mass, as we have calculated. Roughly three quarters of the atmosphere is made of N2, at 28 g/mol, and one quarter O2, at 32 g/mol, for a weighted average around 29 g/mol. Meanwhile, CO2 weighs in at 44 g/mol. Gases come to equilibrium occupying a volume proportional to the number of molecules present (as counted in moles, for instance), so that the volume concentration of CO2 is less than the mass concentration by the ratio 29/44. This adjustment puts us at a yearly volume contribution of 3 ppm.
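Putting the two conversions together:

```python
# Annual CO2 addition, in ppm by mass and then by volume.
co2_per_year_kg = 23e12
atmosphere_kg = 5e18
ppm_mass = co2_per_year_kg / atmosphere_kg * 1e6   # ~4.6 ppm
ppm_volume = ppm_mass * 29 / 44   # air ~29 g/mol vs. CO2 44 g/mol
print(f"{ppm_mass:.1f} ppm by mass -> {ppm_volume:.1f} ppm by volume")
# ~half is promptly absorbed by the oceans, leaving ~1.5 ppm/yr airborne
```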
How does this compare to observation? The famous “Keeling curve”—representing measurements of atmospheric carbon dioxide begun in 1958 from Mauna Loa by professor Keeling of the Scripps Institution of Oceanography—lately shows an annual rise of about 1.9 ppm per year. I am told that about half the CO2 released into the atmosphere is promptly absorbed in the oceans (leading to acidification), so our numbers are entirely consistent.
In case the details of the calculation have clouded the simplicity of what we have done so far, here is a summary. Based on the principal chemical reactions that are non-negotiable if we want energy out of fossil fuels, we must get a predictable amount of CO2 out of the reaction. We know how much energy we get each year from fossil fuels, and the simple process of totaling up the resulting CO2 and relating this to the total atmosphere yielded an estimate right in line with observations. The origin of excess CO2 in our atmosphere is not mysterious.
Total Contribution to Date
It is also worth verifying that the total observed rise in atmospheric CO2 concentration is consistent with the total amount of fossil fuels we have burned to date. Pre-industrial CO2 measured about 280 ppm by volume, while today’s figure pushes 390 ppm. Is the excess 110 ppm reasonable?
To date, we have used about one trillion barrels of oil worldwide, or 140 gigatons (Gt). Meanwhile, scaling U.S. consumption suggests that the world has used about 200 Gt of coal, and 70 Gt of natural gas. The total is approximately 400 Gt of fossil fuels. Applying our handy 3:1 CO2 to fossil fuel input mass ratio, we expect about 1200 Gt of CO2 emission into the atmosphere, or about 600 Gt once half of it is slurped into the ocean. Recalling that the atmosphere is 5×10¹⁸ kg, or 5 million Gt, our 600 Gt of CO2 amounts to 120 ppm by mass, or 80 ppm by volume.
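A sketch of the cumulative bookkeeping (same round totals as above):

```python
# Historical fossil-fuel burn -> expected CO2 rise to date.
fuel_gt = 140 + 200 + 70    # oil + coal + gas, ~400 Gt total
co2_gt = 3 * fuel_gt        # ~3 g CO2 per g of fuel
airborne_gt = co2_gt / 2    # ~half promptly absorbed by the oceans
atmosphere_gt = 5e6         # 5e18 kg = 5 million Gt
ppm_mass = airborne_gt / atmosphere_gt * 1e6
ppm_volume = ppm_mass * 29 / 44
print(f"~{ppm_volume:.0f} ppm by volume vs. ~110 ppm observed")
```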
Does 80 ppm equal our target 110 ppm? You betcha! How can I violate mathematics this way (did I flunk it in school)? Because we have a variety of imprecise numbers feeding the problem: 50% absorption by the ocean, a round 400 Gt of fossil fuels to date, etc. Coming up bang on the target under such circumstances should arouse more suspicion than being a little off.
So the point is that both the yearly and total observed CO2 increases can be attributed to the amount of fossil fuels we have used historically. Turned the other way, we can produce a straightforward prediction of how much CO2 to expect in the atmosphere based on the simple chemistry of fossil fuel use—a prediction that is borne out by measurements.
A Blanket in the Atmosphere
Now part two of the recipe: how hot will the extra CO2 make us? Most physics students, once they learn about radiative heat transfer (affectionately called sigma-T-to-the-fourth), are tasked with calculating the Earth’s temperature in radiative equilibrium with the Sun. If done “correctly,” the answer is disappointingly cold because the greenhouse effect is not incorporated in the simple calculation.
The way it works is, the sun imbues a radiative flux of 1370 Watts per square meter at the position of the Earth. Given its radius of R = 6378 km, the Earth intercepts 1370 W/m² × πR² of the incident sunlight, since the Earth appears as a projected disk to the Sun. Most of this incident flux is absorbed in the oceans, land, atmosphere, and clouds, while the remainder is immediately reflected back to space so the aliens can see our planet. The absorbed part (70%) heats the earth surface environment and eventually is re-radiated to space as thermal infrared radiation, at wavelengths centered at about 10 microns—far beyond human vision (0.4 to 0.7 microns).
The law for thermal radiation is that a surface emits a total radiative power of A·σT⁴, where A is the surface area, σ=5.67×10⁻⁸ W/m²/K⁴ is the Stefan-Boltzmann constant, and T is the surface temperature in Kelvin. For instance, a patch of Earth at the average surface temperature of 288 K (15°C, or 59°F) emits 390 W/m² of infrared radiation. To figure out the temperature of the Earth, we demand that power in equals power out, and radiative transfer is the only game in town for getting heat on and off the Earth. If we did not have a balance between power in and power out, the Earth’s temperature would change until equilibrium was re-established. Hey—that’s what global warming is doing. But let’s not get ahead of ourselves…
While the Earth intercepts a column of light from the sun with area πR², the Earth has a surface area of 4πR² to radiate. Considering that 70% of the incoming sunlight is in play, we have an effective influx of 960 W/m² onto one quarter of the Earth’s surface area (why not half? much of the Sun-side of the Earth is tilted to the sun and does not receive direct, overhead sunlight). So the radiated part must work out to 240 W/m², which implies an effective temperature of 255 K, or a bone-chilling −18°C (about 0°F). Incidentally, if the Earth were black as coal, absorbing all incident solar radiation, the answer would have been a more satisfactory 279 K, or 6°C, but still colder than observed.
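The no-greenhouse equilibrium temperature, as a quick check:

```python
# Radiative equilibrium with no greenhouse effect.
sigma = 5.67e-8   # W/m^2/K^4, Stefan-Boltzmann constant
solar = 1370      # W/m^2 at Earth's distance
albedo = 0.30     # ~30% reflected straight back to space

absorbed = solar * (1 - albedo) / 4   # spread over the whole sphere
temp_k = (absorbed / sigma) ** 0.25
print(f"{absorbed:.0f} W/m^2 -> {temp_k:.0f} K")  # ~240 W/m^2 -> ~255 K
```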
We know that 255 K is the wrong answer; off by 33°C. The discrepancy is the greenhouse effect, and to this we owe our comfort and our liquid oceans. The greenhouse gases absorb some of the outbound infrared radiation and re-radiate in all directions, sending some of the energy back toward Earth. Two-thirds of the effect (about 22°C) is from water vapor, about one-fifth (~7°C) is from carbon dioxide, and the remaining 15% is from a mix of other gases, including methane.
One can see from the absorption figure that water vapor is responsible for the lion’s share of the infrared absorption at relevant wavelengths (under the blue curve), but that the CO2 absorption feature from 13–17 microns also eats some of the spectrum. A crude assessment tells me that the spectrally-weighted water absorption across the outgoing wavelength range is approximately three times as significant as the CO2 absorption feature, reassuringly in line with the 22:7 ratio.
Crudely speaking, if CO2 is responsible for 7 of the 33 degrees of the greenhouse effect, we can easily predict the equilibrium consequences of an increase in CO2. We have so far increased the concentration of CO2 from 280 ppm to 390 ppm, or about 40%. Since I have some ambiguity about whether the 7 K contribution to the surface temperature is based on the current CO2 concentration or the pre-industrial figure, we’ll look at it both ways and see it doesn’t matter much at this level of analysis. If CO2 increased the pre-industrial surface temperature by 7 K, then adding 40% more CO2 would increase the temperature by 7×0.4 = 2.8 K. If we instead say that 7 K is the current CO2 contribution, the associated increase is 7−7/1.4 = 2 K. Either way, the increase is in line with estimates of warming—though the system has a lag due to the heat capacity of oceans, slowing down the rate of temperature increase.
Keep in mind that these figures are based on today’s CO2 concentrations, not the impact of continuing to burn vast amounts of fossil fuels. We have spent about half our total conventional petroleum, and less than half of our total fossil fuel deposits. Thus the ultimate temperature climb could be well over 5 K (9°F) if we continue our practices unabated.
Using a linear relationship between CO2 and temperature change does not constitute a correct treatment, and would fail miserably for large adjustments to CO2 (like a factor of 2 or 3). But for the 40% change under consideration, it captures the direction and approximate magnitude of the effect reasonably well, which is the strength of the estimation approach: get the essential behavior without the burden of unnecessary complexity. A real treatment would acknowledge the saturated nature of the 15 micron absorption feature and use ΔT = C·ln(390/280), where ln() is the natural logarithm function, and C≈2.9–6.5 K according to the IPCC. This leads to an expected increase of 1–2 K at today’s excess concentration. But the point is already made without the fancy pants.
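Both estimates side by side (the sensitivity constant C spans the IPCC range quoted above; this is my sketch, not the post's own code):

```python
import math

ratio = 390 / 280   # today's CO2 vs. pre-industrial

# Naive linear scaling of CO2's ~7 K share of the greenhouse effect:
linear_k = 7 * (ratio - 1)          # ~2.8 K
# Logarithmic form for the saturated absorption band:
log_low_k = 2.9 * math.log(ratio)   # ~1.0 K
log_high_k = 6.5 * math.log(ratio)  # ~2.2 K
print(f"linear: {linear_k:.1f} K, log: {log_low_k:.1f}-{log_high_k:.1f} K")
```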
That Warm Fuzzy Feeling
This simple approach ignores a host of nonlinear feedbacks in the system that could either suppress or amplify the warming effect. But don’t let the wrinkles obscure the big picture. The chemistry and physics needed to understand that anthropogenic climate change is an expected phenomenon are not that hard. In fact, it would be difficult to make the case that burning fossil fuels on the scale that we have done should not increase CO2 concentrations by something on the order of 100 ppm total and recently 2 ppm/yr. Likewise, given the observation of increased CO2 levels in the atmosphere and our understanding of the role that CO2 plays in the thermal blanket, it would be hard to argue why the surface temperature should not rise by a few degrees Celsius.
But before anyone gets the impression that I claim this method offers irrefutable proof that climate change is real and expected, let me clarify that such a cursory analysis cannot constitute an airtight case. Leave that to the IPCC (which has, incidentally, concluded that anthropogenic global warming is an airtight case). But I find it personally persuasive, understanding the broad outline of the problem. Such simple analysis makes it easy to adopt a default position that climate change is real and expected. It would take a lot of convincing evidence of complex masking (conspiratorial?) mechanisms to destroy these pillars. Not impossible, but not easily done.
Now, did anyone save room for dessert? | <urn:uuid:24b39db0-a415-4f75-8918-2093487860a3> | CC-MAIN-2015-48 | http://physics.ucsd.edu/do-the-math/2011/08/recipe-for-climate-change/ | s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446535.72/warc/CC-MAIN-20151124205406-00287-ip-10-71-132-137.ec2.internal.warc.gz | en | 0.931792 | 3,760 | 2.765625 | 3 |
Sorry, friend, but you are confusing the verb "to lay" with the verb "to lie." To lay is a transitive verb, meaning to put something down, and requires a direct object, as in, the chicken lays an egg. To lie, meaning how something is situated, is an intransitive verb and does not use a direct object. The proper usage here would be, "...where do the groceries lie?" Confusing lie and lay is probably the most common English usage error to be found today. But better usage here would be simply... where are the groceries?
[Original obsolete] I think it should be "Where do the groceries lay".
[EDIT:] The original post was edited, so my comment makes no sense now. Also, I got it completely wrong and used "lay" as if it were the present tense of "to lie down", which it is not and is completely wrong.
Depends on the situation. In general, both phrases are interchangeable, but "Где продукты?" has a wider meaning. For example, if you came home and found that all the food had been eaten (by somebody), you could wonder "где продукты?" or "где все продукты?". But if you just can't find the groceries, you might also ask "где продукты?" or "где лежат продукты?"
I agree with olimo. In this context 'goods' and 'products' are not interchangeable. 'Goods' are more like "товары", "продукция" (which is different from "products"), "материалы".
In other words, in Russian if the word "Продукт" is not specified (i.e. "продукт производства", "продукт переработки" etc), it's almost always a grocery product.
Opposite to "produce" (as a noun), "merchandise" is more broad than "groceries". "Groceries" refers to food items while "merchandise" includes goods that are not food items (such as clothing or electronics). Also, "merchandise" isn't really used outside of a business context, and not generally when someone is speaking of their shopping.
For the nominative plural, words ending in a consonant (masculine) most commonly take ы to be plural.
Words ending in a consonant can also take а or я, but I have in my cheater chart, "Less common." As far as I know, I have yet to encounter a masculine consonant-end noun in the Duolingo course which has taken а or я to be plural.
However, words ending in г, к, х, ж, ч, ш, or щ always take и to be plural.
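The two rules above can be sketched as a naive pluralizer for masculine nouns ending in a consonant. This is only illustrative, and it deliberately ignores the -а/-я exceptions (дом → дома́, etc.):

```python
# Naive nominative plural for masculine nouns ending in a consonant:
# -ы by default, but -и after г, к, х, ж, ч, ш, щ (the spelling rule).
SPELLING_RULE = set("гкхжчшщ")

def naive_plural(noun):
    ending = "и" if noun[-1] in SPELLING_RULE else "ы"
    return noun + ending

print(naive_plural("стол"))     # столы
print(naive_plural("мальчик"))  # мальчики
```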
Do not worry, you will certainly encounter masculine nouns taking a stressed -а/-я as the ending. From the top of my head, I can name a few:
- дом → дома́ "houses"
- а́дрес → адреса́ "addresses"
- лес → леса́ "forests"
- учи́тель → учителя́ "teachers"
- до́ктор → доктора́ "doctors"
- профе́ссор → профессора́ "professors"
- про́вод → провода́ "wires"
- по́езд → поезда́ "trains"
- дире́ктор → директора́ "directors, heads of companies"
- глаз → глаза́ "eyes", рог → рога́ "horns", берег→берега́ "shores, banks" (has to do with former dual number forms)
In professional speech, ве́ктор→вектора́ is common (for science guys), you can encounter до́говор→договора́ (in the speech of lawyers and accountants; до́говор is itself non-standard—догово́р is the usual form people use). Also, бухгалтер→бухгалтера́, тре́нер→тренера́.
Much as I know what the standard form is, it is hard for me to use векторы as the plural because I never heard anyone use it—and I heard so many who use вектора!
Друг, брат, стул, муж, сын are sort of irregular and also end in -я in plural: друзья́, бра́тья, сту́лья, мужья́, сыновья́ (note that the stress is not on the ending for some of them).
How is, "Where lie the groceries?" wrong
There is nothing technically incorrect with the way you have phrased it. However, it does sound quite archaic, so I would not expect Duolingo to ever accept it.
When forming a question about the location of something or someone, I would recommend something along the lines of, "Where is x [lying]? Where are y [lying]?"
When forming a question about the current state of an object or person (present continuous), I notice that in English, we frequently use "is" and "are" as helping-verbs to the main verb of the sentence. When forming a question about something that habitually occurs (present simple), we frequently use "do" or "does" as helping verbs.
- Present continuous: Where are they going [right now]? Present simple: Where do they go on Mondays/Tuesdays/holidays? Archaic-sounding: Where go they?
- Present continuous: Who are they talking to [right now]? Present simple: Who do they talk to in IT [when they have a question about how to reboot their computers before calling IT]? Archaic-sounding: several possible versions including, Who speak they to?
- Present continuous: Why is she singing [right now while I'm eating?] Present simple: Why does she sing [so often - is she preparing for a career in music]? Archaic-sounding: I can't even mentally form this without disused versions of "you" and "sing."
There will of course be exceptions - sometimes we use "do" and "does" when we refer to something that is occurring or could be hypothetically occurring in this present moment, such as with thought.
"I don't love you anymore." "Gasp! What do you mean?" (Note - I myself would never use the present-continuous verb "meaning" - "What are you meaning?" - this sounds very strange to my ear, and I am struggling to think of an instance where I would ever say that, even when speaking in a rush. For whatever reason, "What do you mean?" is the best-sounding use of this particular verb in a question-sentence. I reserve the word "meaning" for use as a noun, as in, "The meaning of the Russian word щи in English is pure, unadulterated joy in a bowl.")
"What does Mom think about hang-gliding from the kitchen roof? Would I be in trouble?" [Answer for the curious: yes, and she will tell Dad. :( ]
Because this is not the way a native speaker of English would say it. I know, you are working on learning Russian, you aren't working on your English right now. So it doesn't feel fair that you got marked wrong for a problem with your English grammar when you understood the meaning of the Russian sentence. But the staff at Duolingo have to enter every alternative sentence into the program database in order for the sentences to be accepted. And no one is going to enter sentences that aren't good English.
The verbs стоять and лежать are literally "stand" and "lie".
Стоит is generally used for "standing", for vertical position (including an object leaning against something). It is also used for stable, upright position if an objects is designed to have one—basically, if it has a base or legs (e.g., chairs, plates, cups, microphone stands, boxes). So a plate can "stand" on the table and also "stand" vertically in a cupboard.
Лежит is used for "lying", for upset or random orientation. If an object has a stable "preferred" upright orientation but is positioned otherwise, we also use "лежит". For example, a plate or a bowl put upside down can be described as "тарелка/миска лежит" (plates are rather stable even upside down but aren't indended to be used that way). A book can "lie" flat on a desk or "stand" vertically on a shelf.
The verb лежать is often used for objects "kept" somewhere (though, upright orientation will trigger "стоять" for things like milk or cups).
How would you say "Where are there groceries?" Is that a synonym? Maybe it's just from the "there is" assumption, but they seem similar, though with slightly different context. Without articles, it's difficult for me to know (yet I suppose) when things are talking about specifics (e.g. "the groceries I know of") and something more open ended ("where can I find groceries?")
I grew up in England. "Groceries" was the word we used when exclusively food shopping was meant. "The shopping" refers to the bags resulting from any shopping trip, and could, for example, be bags of clothes. In Scotland, the commonest word for the grocery shopping is "the messages". However I have never heard the word "messages" used in that way outside Scotland.
Он лежит (singular)
Они лежат (plural)
See the complete conjugation below: | <urn:uuid:8c00c6d9-f43b-43dc-8aea-3dbdcfc7da2e> | CC-MAIN-2021-10 | https://forum.duolingo.com/comment/11564984/%D0%93%D0%B4%D0%B5-%D0%BB%D0%B5%D0%B6%D0%B0%D1%82-%D0%BF%D1%80%D0%BE%D0%B4%D1%83%D0%BA%D1%82%D1%8B | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178366959.54/warc/CC-MAIN-20210303104028-20210303134028-00205.warc.gz | en | 0.927747 | 2,488 | 2.6875 | 3 |
Moldova Worksheet – FREE Printable Word Search Games Earth Science for Kids
Moldova Worksheet – Get our interesting FREE Printable Word Search Games Earth Science for Kids.
This FREE worksheet about Moldova is composed of a fun word seek game as well as a find-the-missing words game for kids.
This fun printable Earth Science worksheet on Moldova is truly FREE for parents and teachers to download and use gratis and you may use our activity sheet as many times as you wish at home or in school!
The word search game on Moldova is a great way to keep kids interested and actively engaged while taking part in a science enrichment class, homeschooling, distant learning lessons, regular school science classes or while kids undergo early learning activities.
Our FREE Moldova printable word search for kids is perfect for Grammar school kids who are in First Grade to Fifth Grades. Additionally, Pre-K kids will also enjoy this free Moldova worksheet for kids.
Kids can have fun while learning fun facts on Moldova with this activity sheet while doing word searches activity.
Your children will enjoy to learn fun facts all about Moldova while playing the find-a-word game.
Kids in Grammar school from Grade 1 to 5 can use this Moldova worksheet as a reading comprehension tool.
Your children will learn very well as they will very likely have to re-read several times the Moldova fun facts to identify the missing words. This educational words game isn’t merely a good way to increase their science knowledge, but also enhances the ability to remember what they read and also improves their reading comprehension skills.
Pre-K and Kindergarten kids who can’t read and write can use the free fun facts Moldova worksheet as an exciting listening comprehension activity exercise.
Teachers can read out loud the fun facts about Moldova. Next, they assist the kids to recall what were the missing words. And then, kids learn to recognize words and find them in the Moldova hidden words puzzle.
Parents doing homeschooling activities with their kids can use our Moldova worksheet to teach their kids fun science facts about Moldova.
Teachers and private tutors are free to use our FREE Moldova worksheet to add to the traditional science classes at school and interest children in learning all about Moldova. Most beneficial is for you to make use of our free earth science worksheet for children on Moldova alongside with the free interactive online quiz with score on Moldova.
Our science for kids website offers hundreds and hundreds of FREE printable fun earth science worksheets for kids and hidden-missing-word search games. Download and use our fun science word puzzles to make learning science fun for your children!
What do you know about Moldova? What is the history of Moldova? What is the country’s capital city? How many people live in this country? What is the main religion of the people in Moldova?
Learn more easy science facts about Moldova by downloading our free Moldova worksheet for children!
FREE Fun Facts About Moldova Worksheet for Kids
[sociallocker]Download our FREE Moldova word search game for kids.[/sociallocker]
Cite This Page
You may cut-and-paste the below MLA and APA citation examples:
MLA Style Citation
Declan, Tobin. " Moldova Worksheet - FREE Printable Word Search Games Earth Science for Kids ." Easy Science for Kids, May 2019. Web. 25 May 2019. < https://easyscienceforkids.com/moldova-worksheet-free-printable-word-search-games-earth-science-for-kids/ >.
APA Style Citation
Tobin, Declan. (2019). Moldova Worksheet - FREE Printable Word Search Games Earth Science for Kids. Easy Science for Kids. Retrieved from https://easyscienceforkids.com/moldova-worksheet-free-printable-word-search-games-earth-science-for-kids/
Sponsored Links : | <urn:uuid:993839dc-f7fb-4caa-baff-74921d679ff2> | CC-MAIN-2019-22 | https://easyscienceforkids.com/moldova-worksheet-free-printable-word-search-games-earth-science-for-kids/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257889.72/warc/CC-MAIN-20190525044705-20190525070705-00353.warc.gz | en | 0.897139 | 852 | 3.515625 | 4 |
Back pain all over the globe is one of the most common aches. It is believed that nearly 80% adults suffer from back pain at least once in their lifetime. It is also a difficult symptom to analyze for an exact cause as so many conditions can lead to back pain. However, studies have are indicating that obesity is a risk factor for a chronic and debilitating back pain. Much like back pain, obesity has also become all too common, especially in developed nations. According to the American Obesity Society, about 64.5% of American adults are obese or overweight, which is an alarming statistics in itself.Obesity by itself is not a disease but can lead to many health issues such as diabetes, cardiac disorders, stroke, osteoarthritis, colon cancer, spinal stenosis, hypertension and several more.
Both obesity and back pain make life difficult not just for the patients but also put strains on the healthcare system through added cost for managing chronic diseases. Back pain is also a leading cause of work-related absence. Back pain can affect any age group but is more prevalent between the ages of 30-35 years. According to the researchers, back pain is linked to how well our back muscles, bones, and ligaments function together. Acute back pains are not severe and sometimes disappear without any treatment. However, a chronic pain which is left untreated can seriously impact the quality of life.
How can obesity impact our spine?
Our spine is made of 33 vertebral bones, connected and aligned by the ligaments and muscles. The spine is the prime support of our body, which allows us to bend, twist, and stand upright. The spine is designed in such a way that it can carry the weight up to a certain limit. However, when you carry excess weight, it puts a strain on the spine, impacting its structural integrity and damaging the surrounding structures like nerves, tissues, and capsule. Low back or lumbar region is the most common part of the spine which gets affected by the obesity. So, to live an active life without back pain, it is vital to maintaining a proper body weight to keep the spine healthy.
What is the relation between obesity and low back pain?
Obesity is proven to be a contributing factor in developing the musculoskeletal pain, especially in the knee, hips, and feet. Increased weight, particularly in the upper body, exerts a higher mechanical load or pressure on the weight bearing structures and joints. As a result, it leads to excessive wear and tear, increasing the rate of degeneration. According to a report published in the Journal of Obesity, the patients with increased BMI (body mass index) are more prone to have lower limb injuries than the patients with normal body weight. That is why obese patients are always advised to reduce weight during the spinal checkup or consultations. The benefits of reducing weight are as follows:
- Reduces the mechanical load on the affected structures of the spine
- Decrease the obesity-induced spinal curvature at the lumbar area
So, how will you know whether you are obese or not? The easiest way to measure the obesity is to calculate the BMI, by taking one’s height and weight into account. According to the WHO, the ranges of BMI are as follows:
- For normal weight: BMI is between 18.5 to 24.9
- Overweight: BMI is between 25 to 29.5
- Obesity: BMI is 30 or higher
In some cases, BMI is not a perfect tool to measure obesity, because it does not assess the body fat. For example, an athlete or muscleman may have a high BMI, in spite of not having too much fat. However, according to many researchers, BMI is the standard method measuring body fats in the normal people. Apart from that, many clinicians also use BMI to screen the patients who are at higher risk for obesity-related diseases.
In a study, the researchers were able to find the causal relationship between obesity and back pain. They discovered that the people with higher BMI were more likely to have low back pain. It means, overweight or obesity incredibly increases the risk of back pain. In another study, the researchers found that the obese people, who have low back pain, may have the highest level of inflammation. It is because of the change of the metabolic syndrome.
Interestingly, some other researchers discovered that obesity could also impact the outcome of the treatments of the back pain. They found that conservative treatments and physiotherapy did not work well in the obese/overweight patients. Apart from that, the chances of the infection and revision surgeries are higher in the obese patient who underwent lumbar surgery.
How is obesity linked to muscle weakness in lower back pain?
Many orthopedists say that low back pain mainly occurs due to the muscular weakness, especially in the abdominal area, and the lack of flexibility in the legs and back. Usually, obese people are not physically active, because of their overweight condition. It results in the accumulation of the extra body fats and also restricts the range of motion around the joint areas. So, gradually, the muscles of the abdominal areas get replaced with the fat cells.
Usually, the doctors prescribe physiotherapy and exercises to strengthen the abdominal and back muscles to increase joint flexibility, which helps to prevent and rehabilitate the severe type of low back pain. Maintaining the daily activities with tolerable limit can also contribute to achieving a faster recovery in acute back pain, instead of taking bed rest.
Considering all the risks that obesity brings to an individual, it is important to figure out ways for gradually shedding those extra pounds through an approach that is sustainable. Obesity can happen due to many reasons, so it is important to check if any underlying cause such as hormonal imbalance or any other curable condition is causing you to put on the extra weight. With the help of a dietician, a diet can be planned which in combination with exercises can help you achieve an optimum weight and lead an active lifestyle.
Have a question?
If for some reason an experienced doctor is not available around you, then you can contact us here.
Dr. Kaleem Mohammed graduated as a Bachelor of Physiotherapy in 2014 from Deccan College of Physiotherapy, affiliated to Dr. N.T.R. University of Health Sciences, Vijayawada, India. Dr. Kaleem is an expert at handling physiotherapy needs of patients suffering from orthopedic and spinal conditions and post-surgery rehabilitation. Dr. Kaleem is associated with HealthClues since its inception where he facilitates diagnosis and advanced consultation with senior doctors. He is also a medical researcher and prolific writer who loves sharing insightful commentaries and useful tips to educate the patient community about fitness, treatment options, and post-treatment recovery. | <urn:uuid:9636193c-3330-4a69-a424-0f94d58efdf6> | CC-MAIN-2020-50 | https://www.healthclues.net/blog/en/obesity-leads-back-pain/ | s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141692985.63/warc/CC-MAIN-20201202052413-20201202082413-00123.warc.gz | en | 0.961032 | 1,385 | 3.09375 | 3 |
THE WICKED AGE
MIDDLE ENGLISH COMPLAINT LITERATURE IN TRANSLATION
Readers of Chaucer may be only vaguely aware, if at all, of the body of complaint literature that stands beside and behind his works, some of which critique society in a tone less jovial than the satiric portraits in the General Prologue. Many authors of complaint literature are more acrimonious in their expression of outrage over what they perceived as a corrupt culture. This is not a unique phenomenon but hardly surprising during the fourteenth century, which was beset by famine, livestock disease, bad weather and other natural disasters, plague, political instability so severe that two kings were deposed, civil war and uprisings, war with France, Scotland and Wales, religious schism, the rise of a money economy, and shifting class structures. The picture painted by moralists is full of gloom and bereft of glory.
Yet it was a time of literary flourishing, especially during the Ricardian age, though criticism crept into every genre. Because of the diversity of forms in which complaint is voiced, this collection is divided into two sections; though the literature of both is covered under the umbrella of complaint literature, it is distinctive enough to warrant separate study. The first, Literature of the Estates, probably the most well known primarily because of The Canterbury Tales, presents a broad view of society and its many ills. The second section, Literature of Complaint, includes works commonly categorized as complaint or “Abuses of the Age” literature, such as political and protest poems and songs, which tend to be shorter than estates literature and focus on single issues. Also included in this section are other forms in which social criticism is expressed, such as sermons, narratives, treatises, poems, lyrics, ballads, drama, letters, and even romances.
The poems speak for themselves, and themes are common to many. Authors use conventional motifs, forms and phrases, which was de rigueur (plagiarism was an unknown concept), but as will be seen, convention does not stifle distinct authorial voice. While the poems follow tradition, their specificity is unique in the fourteenth century compared to earlier periods, which argues against simple universality. References to historical events and persons are verified in other literature and official documents, so that the works reflect reality rather than rhetorical commonplace. Since the literature is often historically and culturally specific, following is a brief overview of the world in which it was created, with a concentration on events, conditions and structures relevant to the literature in the collection; the individual sections and works are prefaced with more detailed background material and commentary.
The Rural Environment
The majority of fourteenth-century England was under agricultural or livestock production. Aside from isolated farmsteads and hamlets, the nucleus of rural life was the manor. The common conception, perhaps due partly to the literature, is of a secular lord of the manor, though lands were also held by the church; however, there was little difference between the two types of landholdings in practice.
The manor was supported by villeins, whose status varied from free to unfree, the former having fewer obligations to the lord than the latter. Some of the manor lands were kept by the lord (the demesne) and worked by those owing servile labor as part of their landholding status (serfs), although this shifted during the later part of the century when lords leased out their lands for income to meet changing post-Plague economic demands. The villeins also had land of their own, some very little (smallholders or cottars) and some large, so that there was economic diversity between tenants.
Unfree tenants owed servile labor to the lord, as well as fees for many activities and situations and fines for violations of manorial rules and domestic customs; a number of officials oversaw various manorial operations. The manor had its own court for internal affairs. It was usually presided over by the lord or a high-ranking official like the steward, who administered the manor. Villeins acted as jurors, guarantors, and witnesses, and had some other opportunities to participate in their community, such as having input on the bylaws that regulated village life. They were represented by the reeve, whom they chose from their midst to act on their behalf.
Not all villeins were agricultural workers. There were craftsmen of many kinds who served village needs, particularly building specialties. Some smallholders hired out to richer peasants to augment their income, while those with larger landholdings could sell surplus produce at market. One popular village supplier was the alewife, as the beverage was a dietary staple; peasant fare consisted mainly of grains, dairy and produce but was light on meat, while the menu was reversed for the aristocracy.
The Urban Environment
Manors dotted the landscape, which had nonarable regions, woodlands, rivers and streams in addition to agricultural areas. Villagers who went to a town or city in search of work often didn’t have far to go; urban and rural areas were frequently separated from each other only by the city walls, outside of which were clustered cottages of the poor. Cities depended on agricultural production for food, raw manufacturing and trading materials, and crafted goods. Urban centers and rural communities were also connected through property ownership. Wealthy town dwellers might hold rural property, and country gentry might own town houses.
Urban populations ranged from under 2,000 to over 10,000, and living standards varied greatly. The majority of the population consisted of a wage-earning workforce of craftsmen, laborers and tradespeople, and servants. Housing was often crowded, particularly in lower class areas; the more spacious homes of the wealthy tended to be located in the center of town. Most residences, rich or poor, had some sort of garden and yard, perhaps with livestock. Urban centers were frequently divided into districts by craft and trades, and streets were lined with workshops which often had living quarters above, shops, taverns, and sellers of foodstuffs by both suppliers and street vendors. There would also be churches, abbeys, hospitals,[1] and alehouses, public buildings, guild halls, and open spaces for fairs, civic games and entertainment.
Towns sometimes grew from villages or were established around centers such as cathedrals, monasteries or universities. Towns, cities and boroughs could receive liberties such as fairs, local customs, tolls, land acquisition and, occasionally, incorporation and county status. Independence and self-government were sought and more frequently achieved in centers under royal charter than those controlled by the church or manorial lords, where liberties were more restricted.
Self-government was headed by the elite, usually of the mercantile class, who served city administration in various capacities and bore a great share of responsibility for civic welfare. Wealthy guild members often entered city government through their guild office. Craft and trade guilds regulated prices and standards of production, working conditions and wages, and provided support and solidarity for its members. They also contributed to a city’s social and religious life. Apprenticeship was long and expensive but could lead to citizenship and the advantages of a town’s liberties. Like guilds, fraternities were organized around a parish or church, and perhaps a craft, but were not involved in regulatory activities.
The country’s government was located at Westminster. It adjoined London, which was larger and more crowded than other cities. Visitors to the much smaller Westminster found lodging, food and entertainment in London, the skyscape of which was dominated by the Tower and cathedral spires. Secular and ecclesiastical nobility owned great town houses in London, some of which lined the Thames, and the royal family had splendid residences in the city, like John of Gaunt’s Savoy Palace, which was destroyed during the Rising of 1381.
The king headed the government, along with parliament, which was comprised of representatives from the nobility, church, and, increasingly in the fourteenth century, the second estate. The commons gained power, but the term must not be confused with the common people of the third estate. Rather, its members were knights, gentry, and influential burgesses, usually merchants. The royal courts were located at Westminster: The King’s Bench, Court of Common Pleas, the Chancery and the Exchequer. The king’s justice system, of which he was chief justiciar, extended throughout the country with traveling courts, and a network of judicial and peacekeeping officials, such as sheriffs, bailiffs, justices of the peace, and others. The judicial system was complex, with the addition of manorial courts that treated local matters, and ecclesiastical courts.
References to social strata and structure, lay and ecclesiastical, are assumed by the medieval author and audience but may need clarification for the modern reader, which can be offered only broadly here. Conceptually, medieval society comprised three “estates,” or groups: the clergy, who prayed; the knights, who fought; and the laborers who supported all three, particularly the two above them. There were levels within each estate, but by the fourteenth century it was apparent that the system, which in reality was never operative, could not accommodate the emerging classes, particularly the “middle class.” Though the three-estate system remained the ideal regardless of its obvious flaws, the social structure was far more complex, which is reflected in the literature, particularly that of the estates.
The lay hierarchy was, of course, headed by the king, followed by his magnates, or peers, of the nobility: duke, earl, baron. In the aristocratic class, the knights were superior to the knights bachelor and squires. Rural society ran from landed gentry and franklins and, late in the century, yeomen, to husbandmen and cottars, the manor’s workforce (the famuli), and the homeless. In the cities, there were the mayor, aldermen, great and lesser merchants, craftsmen, apprentices, laborers and servants.
Ecclesiastical structure and terminology is somewhat more confusing, since there were three main types of religious organizations and identities, though some of their functions overlapped and sometimes competed. The church in England was under the pope’s jurisdiction, though the crown and papacy had areas of conflict, particularly regarding taxation and preferment issues. The country was organized by province, diocese, archdeaconry, deanery, and parish, which were correspondingly under the control of the archbishop, bishop, archdeacon, dean and parish clergy. These were beneficed offices, which brought income through landholding and its produce; in this regard, the upper echelons were on a par with lay nobility and aristocracy. In the parish, only the rector (parson) held a benefice, which ranged greatly in value, while vicars, curates or chaplains, who stood in for rectors in their absence or incapacity, might receive a portion of the benefice or more likely a fixed stipend, as did the parish priests, many of whom were likely to be from the peasantry. Education of the clergy ranged from university learning at the high levels to tutelage under a village priest. Pluralism was the practice whereby a cleric held more than one benefice, or other religious and/or administrative posts in addition to his benefice. Some left parishes to serve in chantries, endowed chapels, which could be lucrative and created absenteeism in the parish churches.
In addition to the secular clergy were the regular, or cloistered, clergy. There were a number of orders, chiefly the Benedictines and Cistercians, which were supported by endowments and had monastic communities that held estates, many on a par with lay nobility, which required extensive administration. The monastic orders peaked in power by the end of the twelfth century, and the mendicant orders rose soon thereafter: the Franciscans, Dominicans, Carmelites, and Augustinians. The friars were not confined to monastic life but served the communities in which they settled, usually cities, and became active in education, as well as ministerial services. Although poverty was a basic principle associated with mendicancy, many houses became wealthy through endowments and benefactions.
There was conflict at the parochial level between mendicant and secular clergy over spiritual ministrations and the fees, endowments and benefactions attached. At the higher level, strain between the king and pope was often driven by economics. For example, each frequently had candidates for ecclesiastical positions such as bishoprics and their attendant revenues. Other issues such as both papal and state taxation of the clergy are less visible in the literature, but demonstrate that antagonism engendered by clerical competition ran the hierarchical gamut.
The history of the fourteenth century is marked by conflict and disaster, social and natural. Politically, there was imperfect judicial administration, conflict between king and parliament, clashing court politics and civil unrest, due in great part to monarchical instability. Edward II (1307-27) took over a throne burdened with domestic difficulties, heavy debt and war with Scotland. Magnates were concerned with controlling monarchical power, and Ordinances were drawn up to regulate administration, prevent oppression, and strengthen parliamentary participation. Ordainers were appointed to oversee the implementation of the regulations, particularly concerning taxation, royal expenditures and choice of councilors, but the Ordinances were not consistently followed or enforced and became a source of conflict.
Edward resisted counsel and displayed favoritism that was divisive and led to civil conflict, which undermined governmental administration. His ineptitude extended to military leadership, and his losses in the war with Scotland caused further civil and economic instability. Throughout his reign Edward was opposed by the peerage, and finally by his queen, Isabella, who led the action that resulted in Edward’s deposition. She and her lover, Roger Mortimer, ruled during the regency of Edward’s son and successor, Edward III, but there was no improvement until the new king took the rule at age eighteen.
Edward III’s reign (1327-77) began and ended darkly, but overall the monarchy was somewhat stabilized and strengthened. Edward created a new, supportive peerage and tended toward conciliatory rather than coercive politics. He redeemed some of his father’s losses in Scotland, though the conflict continued and victories ebbed and flowed.
The Hundred Years War with France (1337-1453) centered on lands held by England in fiefdom to the French king. Edward made claim to the French throne in hope of gaining land and power. Alliances shifted, treaties were made and broken, and the costs of funding the war were burdensome. The issues of sovereignty were not resolved during Edward’s lifetime, and he withdrew from participation when hostilities were renewed in 1369. In his elder years, he was surrounded by favorites, who were eventually impeached but later restored.
Internal conflicts and distrust were so strong when Richard II succeeded Edward that a council was appointed during his minority rather than an individual regent. Richard did little to restore stability but, like Edward II, ruled with tyranny, favoritism and foreign failure. Richard resisted parliamentary controls, the peerage was once again divided, and civil war loomed. There were some periods of calm during his reign, but conflict always simmered and eventually gave Henry Bolingbroke the opportunity to seize the throne. Like that of Edward II, Richard’s rule ended in deposition.
Natural disasters, such as rains, floods, drought, earthquake and livestock disease, which were thought to be sent by God as punishment, appear chiefly in the literature as dearths and famine. There were a number of bad harvests throughout the century with dearths in 1321-22, 1350-52, 1369-70 and 1390-91. One of the most calamitous was the famine of 1315-17, brought on by years of excessive summer rainfall across northern Europe. In England, severity depended on geographic locale, but generally harvests of all types of grain such as barley and rye, and chiefly wheat, were well below average, nearly half or less in some regions.
Scarcity brought inflated prices; a quarter of wheat (eight bushels) rose from 8s in the autumn of 1315 to over 26s by summer 1316. In some areas, prices exceeded 40s, and costs of other grains as well as salt and dairy products rose commensurately; some large-scale landowners benefited from sales at the high prices. Less fortunate lords laid off workers or reduced wages, and peasants abandoned, sold or encumbered their land, which allowed wealthier peasants to exploit their poorer neighbors in order to increase their own landholdings. Almsgiving was reduced, theft was common among the desperate poor, and the death rate from starvation and related illnesses such as typhus rose.
Livestock disease sometimes accompanied famine, as it did during 1315-17, and at other times as in 1325-26 during summer droughts. Flocks of sheep stricken with murrain were decreased by half in 1316-17, and again in 1321-22. Shortages caused food scarcity, and reduction of wool supply had far-reaching economic effects on both textile production and raw material exports, as well as royal income from taxation.[2] Similar losses of cattle and oxen from disease occurred between 1319-21, so that horses were used to plough, which affected the ability to work the land. The combination of lost livestock and grains during the major agrarian crises was devastating, not only in mortality but in standards of living, which fell to poverty levels for many.
The Plague that struck England in 1348-49, with subsequent outbreaks throughout the century, decimated the population, though to varying degrees depending on location; overall estimates range from thirty to fifty percent. Although it accelerated trends already in motion, depopulation had a tremendous effect on every aspect of life. Socioeconomically, it shifted the distribution and management of landholdings and labor, impacted the value of rural properties and produce, and jostled the social balance. Cultivation and crop production decreased, grain prices were depressed, prices for manufactured goods rose, rents fell and, most significantly, wages rose due to labor shortages. Villeins’ demands led to a mobile work force, and lord/tenant relationships were altered as the trend for accepting monetary payment for rents and fees in lieu of service increased. Many lords leased out their demesne lands to raise money necessitated by the rising cash economy, and overall the combined trends were not in their favor though, again, it varied regionally.
The governing class wished to protect their economic interests in the villeinage system and the monetary benefits derived therefrom. Attempts were made to stabilize the feudal system and contain the economic dynamism through legislation, such as wage freezes and prohibitions against changing jobs. Lords continued trying to exert control and to literally keep villeins in their place. Despite government regulation, the law of supply and demand prevailed and the economic transition continued. Social relations shifted as peasants found more freedom of choice of occupation and manorial affiliation, the opportunity to increase landholdings and income, improved working conditions, and a measure of upward mobility, albeit restricted, that pressed on traditional stratification, both within the peasantry and upon upper levels.
One of the most signal events of Richard's reign was the Rising of 1381. Though other grievances were at issue, the catalyst was taxation. Amid opposition to the king's spending, two poll taxes were approved in 1377, one in 1379 and another in 1380, of increasing amounts. The taxes fell hardest on the poor, and the latest, due to be levied in 1381, was yet another exercise to raise funds for the war. This time it proved intolerable and was met with resistance which turned to violence.
The Rising was not a singular phenomenon but rooted in tensions that had been growing for some time; as Hilton observes, "It is important to remember that the 'popular' movements . . . did not erupt in an otherwise tranquil society. The social harmony, which ecclesiastical, political and social theorists constantly idealized in their writings and sermons, never in fact existed" (Hilton, Class Conflict 80).
Antagonism between the laboring classes and those in power was of long standing in both manor and town. Although the Rising is also known as the "Peasants' Revolt" because it was born in some, not all, English rural counties, it should be noted that the rebel forces were joined by urban sympathizers whose grievances differed but who reacted similarly to what they perceived as oppression.3
Despite the enhanced economic opportunities and potentially reduced servility of the period, discontent and rebellion increased, as witnessed by the Rising. Scholars suggest that this discontent arose not only from poverty and oppression, but from the recognition of the possibility of an improved socioeconomic situation and the frustration of being hampered in the attainment of that advancement. Patterson agrees that the Rising was more the "outraged reaction of independent peasant producers to the seigniorial attempt to contain their growth" than the outcry of an "unbearably downtrodden peasantry" (253). This is borne out by the theory that although there were some poor participants, most were from the middling classes.
McKisack focuses on historical evidence which speaks of discontent due to economic factors. She cites court and manorial records that reflect the peasants' reluctance and/or refusal to perform work based on their wish to escape villein tenure, obtain casual work and earn higher wages (339). The issues, although they varied somewhat in specificity and intensity depending on the location, included the abolition of serfdom, free contract for labor services, low rents, freedom for peasant trade, repeal of oppressive legislation and the exploitation of landless and nearly landless laborers (McKisack 338).
But there were causes for the discontent to be found outside of the purely economic sphere. The rebels' desire for freedom from oppression and exploitation, though manifested materially, involved degradation. Patterson suggests that it was not simply the economic hardship of serfdom but its stigma as a "permanent condition of sinfulness" which fueled the Rising and all late medieval peasant revolts, and that the desire for human dignity was also at issue (264). When the Rising had been quelled, the king retracted the placatory agreement he had made to grant the rebels' demands and their freedom and is reported to have said, "Villeins ye are and villeins ye shall remain" (McKisack 418).
1 Medieval hospitals were generally almshouses rather than medical facilities, though some cared for the sick, especially lepers.
2 Although England exported a number of products and commodities, wool was by far the most important. A long and complex trade network involved growers and producers, merchants, bankers and brokers, and the crown, and a system of cash and credit transactions. The crown gained export taxes, customs, duties, fines and subsidies, and in order to control wool trade and profits, the crown established compulsory staples, towns through which the wool had to pass. These towns, the domestic and foreign locations of which were changed repeatedly but finally settled at Calais, were under the control of merchants and a mayor, dominated by Londoners. This monopoly benefited the staplers who, in return for their privileged trade position, were then expected to lend money to the crown, repayment of which was secured through exemption from staple regulations.
3 For a fuller discussion of resistance and revolt before, during and after the Rising by both rural and urban workers, see the Introduction to the Rebellion section of this collection.
Why does my child cry so much?
Q: My son is 3.4 years old. He weighs 15 kg and his height is 100 cm. As parents we have observed that, though his other behavioural and growth patterns seem to be normal, he cries much more than other children of his age. He still sleeps with us. Every time he gets up (during afternoon or morning), he starts crying after 2-3 minutes. This is regardless of whether or not we are present in front of him. We do comfort him by hugging, but still he continues crying for about 10-15 minutes. We have tried giving him his favourite toy while sleeping, but it's of no help. If we don't pay any attention, he still keeps crying. We do understand that as a child he must be doing it to get our attention, but as parents we just want to know if there is any particular way that we should handle him so that his crying can be reduced. Is it normal for a child to cry like this? Will this affect his growth in any way? He is lean and of average build. We have read an article on the Internet that over-crying may cause children to be leaner than others. Is this true?
A: By the age of three years, a child's language is quite developed. You must ask him to tell you why he is crying. Let him know that only babies cry, before they have learned to talk. Once he can speak, talking should be a better way of conveying what he wants. Perhaps he has associated crying with being noticed by you. Convince him that you will pay attention more quickly if he tells you what he wants than if he cries for it. Start rewarding him (with smiles, hugs and picture books) when he uses speech to convey what he wants. Just to rule out any health problem, get him checked by a paediatrician.
No big flu wave is expected this year due to measures in place to curb the spread of coronavirus, an expert said on Friday.
She warned, however, that people suffering from both flu and coronavirus could experience worse health problems.
According to assistant professor of paediatrics and infectious diseases at the University of Cyprus’ Medical School, Maria Koliou, this year was not expected to see a large wave of type A influenza.
This is due to the precautionary measures already put in place to curb the coronavirus, she said.
“Influenza A is also a respiratory disease,” Koliou told the Cyprus News Agency. She added that the mask also protects against the flu as well as social distancing and leaving windows open for good ventilation.
“They are all respiratory diseases, so prevention is similar,” said Koliou who also chairs the medical association’s special committee on tackling the coronavirus.
“Therefore, we do not expect to have a large wave of influenza A, unless something unpredictable happens like in 2018,” she said.
Although the restrictive measures imposed appear to have helped keep away the flu, Covid cases continue to rise without abatement. Experts abroad explain this with a phenomenon called ‘viral interference’: Covid, they say, makes it difficult for other viruses to co-exist with it.
Yet, Koliou warned that people who happen to fall ill both with flu and coronavirus are in for a tough time.
“We do not want both viruses to coexist in one patient, because they work together to cause problems for the patient,” she said. “If influenza A and coronavirus occur at the same time in one person, then things become more difficult for him or her.”
She added that this was another reason for people to get vaccinated for the flu, especially vulnerable groups as per the health ministry’s recommendations.
According to Koliou, no flu cases have been recorded so far this year in Cyprus. Countries around the world are showing a similar trend. Flu season in Cyprus, she said, starts either at the end of December or beginning of January. The only exception in the last nine years was in December 2018, when there was a serious outbreak which started in mid-December, she said.
John Knox & Queen Mary
When the reformation finally arrived in Scotland, the old Catholic faith did not collapse overnight - the process of change took place gradually over a period of years.
Part of the reason for this was that, while firebrands like John Knox were desperate to move Scotland towards the Protestant faith, the Scottish rulers were happy with Catholicism and wanted to see it stay.
The battle between John Knox and Mary Queen of Scots was one of the most fascinating tussles between two strong characters in Scotland's history, and it was a religious war in which Knox would eventually end up as the winner.
When he was sent away to France to work as a galley slave after his part in the murder of Cardinal Beaton, it seemed that the Reformers had fired their best shot and missed. But Scotland was still a highly unstable place, and dissatisfaction with the Catholic church was rampant. It was virtually inevitable that change would come, and that when it did, it would alter the nature of Scotland forever.
Henry VIII, who had converted England to Protestantism in 1534 by establishing the Church of England, was keen that the infant Mary Queen of Scots - born just a week before James V died in 1542 - should marry his five-year-old son Edward, so uniting the two crowns and effectively bringing the Scots under English control.
However, Henry had not reckoned on the opposition he faced from Mary's formidable Catholic mother, Mary of Guise, who opposed the match and eventually forced its cancellation. The result was that a furious Henry invaded southern Scotland and razed towns and border abbeys in a so-called "rough wooing".
Knox and the Reformers recognised that their success depended to a large extent on forging alliances with the English. At the same time, however, the council which was ruling Scotland in the infant Mary's name - which included the French-born Mary of Guise - felt that Scotland's best hope lay in protection from another Catholic nation, France.
When the English attacked Scotland again in 1548, the Scots asked the French to intervene. They sent 7000 troops, but would only agree to use them if the infant Mary Queen of Scots was betrothed to the future king of France, the Dauphin, Francois.
The aim was clear - to bring the two Catholic nations together under a united French crown - but the Scots were clever enough to allow Francois only her hand in marriage, and not the Scottish succession.
John Knox, meanwhile, had finished his sentence in France and gone to England to try and further influence its Protestant conversion. However, Henry VIII had died, and when his son Edward VI died also, Henry's daughter Mary took the throne. She was a Catholic and, as she attempted to move her country back towards the old faith, Knox fled in fear of his life to the Continent.
He went to Switzerland, where in Geneva he met and heard the preaching of fellow reformer John Calvin. Calvin was a hardline, no-compromise firebrand who believed that the Bible was the only true source of religious truth. It was a much harder type of Protestantism than the Lutheranism on which Knox had cut his theological teeth, and he warmed to it and vowed to take it to Scotland.
Just what type of a person, though, was Knox? It is clear that he thought of himself as the father of the Scottish Reformation, but in reality the change was happening in any case without his presence north of the border.
Father Mark Dilworth, the author and historian who is a former keeper of the Scottish Catholic Archives and an expert on the period, believes Knox may not have been as influential as is popularly thought.
"Most of what we actually know about Knox comes from what he tells us in his own writings", Dilworth says. "He was certainly a strong figure, but he may have magnified his own importance. There is a suspicion among historians that he was an extremely good self publicist, and he may not actually have been as important as people think."
Knox wanted to return to Scotland and tested the water with a couple of preaching visits. By 1559, he felt it was safe to come back for good. By then, the Reformation north of the border was in full swing and, despite Mary of Guise's influence, Scotland's nobles had swung behind Protestantism.
Five of them had titled themselves the Lords of the Congregation and made a covenant to overturn the Roman church and install the Protestant faith instead. Others flocked to their cause, and the tide turned in their favour in 1558 when Mary Tudor of England died and was succeeded by the Protestant Elizabeth I.
Because of this, Mary of Guise once again began to feel vulnerable, and demanded that all Protestant preachers appear before her and declare their allegiance to Rome. Unsurprisingly, none bothered to turn up, so she tried to ban them.
It was a losing battle. More and more Scots were signing up to the Reformed faith, and when Knox returned, he became ordained as Minister at St Giles in Edinburgh. His brilliant preaching abilities had the ability to stir people into action, and when he delivered a sermon in Perth, the mob rioted for two days and destroyed not only most of the fittings in the church, but also two monasteries and an abbey.
Mary of Guise reacted with horror and ordered her forces to march on the Reformers. But the Protestant nobles were also determined to strike while the iron was hot, and they occupied St Andrews and sacked the magnificent cathedral there. Scotland was virtually in a state of civil war, with Knox and Mary of Guise at the heart of it.
Again Mary of Guise - whose daughter had married the French Dauphin the previous year - waited for her French allies to arrive and bail her out. But Queen Elizabeth, who was worried about French claims that Mary Queen of Scots was the successor to her own throne, decided to back the Scottish Reformers.
As a result, the English fleet was sent to besiege the French, who were garrisoned at Leith. The French fought back, but then there was an incredible twist to the tale - Mary of Guise suddenly died. The French then surrendered and concluded peace terms with the English in the Treaty of Edinburgh - a move which effectively marked the end of the Auld Alliance between Scotland and France.
Under the terms of the deal, a council of 12 people was charged with the responsibility of governing Scotland during the absence of Mary Queen of Scots in France, though Mary herself was allowed to choose her own faith. Crucially, however, it gave the Scots parliament real power and the opportunity to call the shots in favour of Scotland's reformed faith.
Needless to say, they took it. The parliament quickly abolished the authority of the Pope in Scotland, and laid down a rule that anyone who claimed his supremacy would be exiled and lose their possessions. The public celebration of Mass was forbidden and John Knox was asked to mastermind a new declaration of the Reformed faith, which came to be known as the Scots Confession.
However, in France, yet another astonishing twist to the drama was unfolding. The husband of Mary Queen of Scots, by now the French king Francois II, had died of a septic ear. Mary was only 17, and grief-stricken. Her advisers thought the best course of action was for her to return to Scotland - the country she had last seen at the age of five.
In August 1561, Mary sailed back to her native land. A devout Catholic, she was returning to a kingdom where the Protestants now had the whip hand. With Knox now at the height of his power, it seemed like a formula for division, bitterness and disaster. Which, of course, it was.
- 1542 Antonio da Mota is the first European to enter Japan
- 1557 An influenza epidemic breaks out across Europe
- 1557 "The Sack-Full of Newes" becomes the first English play to be censored
- 1558 Europeans adopt the English habit of taking snuff
- 1559 The University of Geneva is founded
- 1560 Madrid becomes Spain's capital
- 1560 Jean Nicot imports the tobacco plant into Western Europe
- 1561 The Basilica of St Basil in Moscow is completed
- 1561 The first Calvinist refugees from Flanders arrive in England
Report Ranks States on Opportunities to Learn
Nationwide, the opportunities for poor and minority students to attend a high-performing school are only about half what they are for white students, says a national study out today.
The report by the Cambridge, Mass.-based Schott Foundation for Public Education ranks all 50 states on the basis of student achievement and the percentages of students from historically disadvantaged groups attending the state's top-performing public schools.
Only eight states fared well on both counts: Vermont, Maine, New Hampshire, Minnesota, Oregon, Washington, Idaho, and Virginia. It may be worth noting that these are all states with comparatively small African-American, Latino, and Native American populations.
The 10 states at the other end of the scale—in other words, the ones that got low scores for both proficiency and educational access—were somewhat more mixed in that regard. They are: Missouri, Texas, Rhode Island, Illinois, Michigan, Arkansas, Arizona, Nevada, West Virginia, and the District of Columbia.
Here's one surprise in the study: Some wealthy, typically high-achieving states, such as Connecticut, Massachusetts, and New York, scored near the bottom on the foundation's overall opportunity-to-learn scale.
And here's another: Judged solely on the basis of disadvantaged students' access to the best schools, Louisiana ranks first. But the report also says that finding may be a bit skewed because the state's public schools have disproportionately large shares of black, Latino, Native American, and low-income students, and large percentages of white, middle-class students enroll in private schools there.
The full report, titled "Lost Opportunity in America," also contains statistics on disparities, within and among states, in access to early-childhood education, high-quality teachers, instructional materials, and a college-preparatory curriculum. Check it out here.
Blooms of phytoplankton color the water along the coast of Washington and British Columbia both south (right) and north (left) of the Strait of Juan de Fuca in this image, captured by the Sea-viewing Wide Field-of-View Sensor (SeaWiFS) on Friday, July 23, 2004. Without corroborating data collected at sea level, it’s impossible to tell which species of phytoplankton are coloring the water or if the bloom is harmful. This is, however, an area known to be afflicted by harmful algal blooms, and the Washington State Department of Health recently closed many of the beaches shown in this image to shellfish harvesting, a sign that the bloom may be harmful. Data such as those represented by this SeaWiFS image could be potentially useful to coastal managers seeking a broader view of water conditions in the region.
Image provided by the SeaWiFS Project, NASA/Goddard Space Flight Center, and ORBIMAGE